00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3655 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3257 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.050 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.051 The recommended git tool is: git 00:00:00.051 using credential 00000000-0000-0000-0000-000000000002 00:00:00.055 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.079 Fetching changes from the remote Git repository 00:00:00.082 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.114 Using shallow fetch with depth 1 00:00:00.114 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.114 > git --version # timeout=10 00:00:00.152 > git --version # 'git version 2.39.2' 00:00:00.152 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.189 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.189 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.603 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.615 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.626 Checking out Revision 4b79378c7834917407ff4d2cff4edf1dcbb13c5f (FETCH_HEAD) 00:00:03.626 > git config core.sparsecheckout # timeout=10 00:00:03.637 > git read-tree -mu HEAD # timeout=10 00:00:03.655 > git checkout -f 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=5 00:00:03.675 Commit message: "jbp-per-patch: add create-perf-report job as a part of testing" 00:00:03.676 > git rev-list --no-walk 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=10 00:00:03.762 [Pipeline] Start of Pipeline 00:00:03.781 [Pipeline] library 00:00:03.783 Loading library shm_lib@master 00:00:03.783 Library shm_lib@master is cached. Copying from home. 00:00:03.797 [Pipeline] node 00:00:03.803 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:03.805 [Pipeline] { 00:00:03.813 [Pipeline] catchError 00:00:03.814 [Pipeline] { 00:00:03.824 [Pipeline] wrap 00:00:03.832 [Pipeline] { 00:00:03.838 [Pipeline] stage 00:00:03.839 [Pipeline] { (Prologue) 00:00:03.853 [Pipeline] echo 00:00:03.854 Node: VM-host-SM9 00:00:03.858 [Pipeline] cleanWs 00:00:03.866 [WS-CLEANUP] Deleting project workspace... 00:00:03.866 [WS-CLEANUP] Deferred wipeout is used... 
00:00:03.871 [WS-CLEANUP] done 00:00:04.063 [Pipeline] setCustomBuildProperty 00:00:04.167 [Pipeline] httpRequest 00:00:04.187 [Pipeline] echo 00:00:04.188 Sorcerer 10.211.164.101 is alive 00:00:04.195 [Pipeline] httpRequest 00:00:04.198 HttpMethod: GET 00:00:04.199 URL: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:04.199 Sending request to url: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:04.200 Response Code: HTTP/1.1 200 OK 00:00:04.201 Success: Status code 200 is in the accepted range: 200,404 00:00:04.201 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:04.651 [Pipeline] sh 00:00:04.922 + tar --no-same-owner -xf jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:04.933 [Pipeline] httpRequest 00:00:04.946 [Pipeline] echo 00:00:04.947 Sorcerer 10.211.164.101 is alive 00:00:04.954 [Pipeline] httpRequest 00:00:04.957 HttpMethod: GET 00:00:04.957 URL: http://10.211.164.101/packages/spdk_9937c0160db0c834d5fa91bc55689413b256518c.tar.gz 00:00:04.958 Sending request to url: http://10.211.164.101/packages/spdk_9937c0160db0c834d5fa91bc55689413b256518c.tar.gz 00:00:04.959 Response Code: HTTP/1.1 200 OK 00:00:04.959 Success: Status code 200 is in the accepted range: 200,404 00:00:04.959 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_9937c0160db0c834d5fa91bc55689413b256518c.tar.gz 00:00:23.149 [Pipeline] sh 00:00:23.427 + tar --no-same-owner -xf spdk_9937c0160db0c834d5fa91bc55689413b256518c.tar.gz 00:00:26.720 [Pipeline] sh 00:00:27.010 + git -C spdk log --oneline -n5 00:00:27.010 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:00:27.010 6c7c1f57e accel: add sequence outstanding stat 00:00:27.010 3bc8e6a26 accel: add utility to put task 00:00:27.010 2dba73997 accel: move get task utility 00:00:27.010 e45c8090e accel: improve accel sequence obj release 00:00:27.070 [Pipeline] withCredentials 00:00:27.081 > git --version # timeout=10 00:00:27.093 > git --version # 'git version 2.39.2' 00:00:27.107 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:27.109 [Pipeline] { 00:00:27.144 [Pipeline] retry 00:00:27.146 [Pipeline] { 00:00:27.165 [Pipeline] sh 00:00:27.443 + git ls-remote http://dpdk.org/git/dpdk main 00:00:28.828 [Pipeline] } 00:00:28.841 [Pipeline] // retry 00:00:28.844 [Pipeline] } 00:00:28.858 [Pipeline] // withCredentials 00:00:28.864 [Pipeline] httpRequest 00:00:28.889 [Pipeline] echo 00:00:28.890 Sorcerer 10.211.164.101 is alive 00:00:28.898 [Pipeline] httpRequest 00:00:28.902 HttpMethod: GET 00:00:28.903 URL: http://10.211.164.101/packages/dpdk_830d7c98d6b2746401142050a88ff5cbc3465ba7.tar.gz 00:00:28.903 Sending request to url: http://10.211.164.101/packages/dpdk_830d7c98d6b2746401142050a88ff5cbc3465ba7.tar.gz 00:00:28.916 Response Code: HTTP/1.1 200 OK 00:00:28.916 Success: Status code 200 is in the accepted range: 200,404 00:00:28.917 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_830d7c98d6b2746401142050a88ff5cbc3465ba7.tar.gz 00:00:45.839 [Pipeline] sh 00:00:46.119 + tar --no-same-owner -xf dpdk_830d7c98d6b2746401142050a88ff5cbc3465ba7.tar.gz 00:00:47.503 [Pipeline] sh 00:00:47.785 + git -C dpdk log --oneline -n5 00:00:47.785 830d7c98d6 eal/x86: improve 16 bytes constant memcpy 00:00:47.785 8e8aa4a4f9 devtools: check that maintainers are listed in mailmap 00:00:47.785 ec8fb5696c examples/vm_power_manager: remove 
use of EAL logtype 00:00:47.785 13830b98b2 examples/l2fwd-keepalive: use dedicated logtype 00:00:47.785 8570d76c64 app/testpmd: use dedicated log macro instead of EAL logtype 00:00:47.803 [Pipeline] writeFile 00:00:47.821 [Pipeline] sh 00:00:48.152 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:48.164 [Pipeline] sh 00:00:48.441 + cat autorun-spdk.conf 00:00:48.441 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:48.441 SPDK_TEST_NVMF=1 00:00:48.441 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:48.441 SPDK_TEST_USDT=1 00:00:48.441 SPDK_RUN_UBSAN=1 00:00:48.441 SPDK_TEST_NVMF_MDNS=1 00:00:48.441 NET_TYPE=virt 00:00:48.441 SPDK_JSONRPC_GO_CLIENT=1 00:00:48.441 SPDK_TEST_NATIVE_DPDK=main 00:00:48.441 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:00:48.441 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:48.448 RUN_NIGHTLY=1 00:00:48.451 [Pipeline] } 00:00:48.488 [Pipeline] // stage 00:00:48.498 [Pipeline] stage 00:00:48.499 [Pipeline] { (Run VM) 00:00:48.508 [Pipeline] sh 00:00:48.780 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:48.780 + echo 'Start stage prepare_nvme.sh' 00:00:48.780 Start stage prepare_nvme.sh 00:00:48.780 + [[ -n 1 ]] 00:00:48.780 + disk_prefix=ex1 00:00:48.780 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:00:48.780 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:00:48.780 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:00:48.780 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:48.780 ++ SPDK_TEST_NVMF=1 00:00:48.780 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:48.780 ++ SPDK_TEST_USDT=1 00:00:48.780 ++ SPDK_RUN_UBSAN=1 00:00:48.780 ++ SPDK_TEST_NVMF_MDNS=1 00:00:48.780 ++ NET_TYPE=virt 00:00:48.780 ++ SPDK_JSONRPC_GO_CLIENT=1 00:00:48.780 ++ SPDK_TEST_NATIVE_DPDK=main 00:00:48.780 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:00:48.780 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:48.780 ++ RUN_NIGHTLY=1 00:00:48.780 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:48.780 + nvme_files=() 00:00:48.780 + declare -A nvme_files 00:00:48.780 + backend_dir=/var/lib/libvirt/images/backends 00:00:48.780 + nvme_files['nvme.img']=5G 00:00:48.780 + nvme_files['nvme-cmb.img']=5G 00:00:48.780 + nvme_files['nvme-multi0.img']=4G 00:00:48.780 + nvme_files['nvme-multi1.img']=4G 00:00:48.780 + nvme_files['nvme-multi2.img']=4G 00:00:48.780 + nvme_files['nvme-openstack.img']=8G 00:00:48.780 + nvme_files['nvme-zns.img']=5G 00:00:48.780 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:48.780 + (( SPDK_TEST_FTL == 1 )) 00:00:48.780 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:48.780 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:48.780 + for nvme in "${!nvme_files[@]}" 00:00:48.780 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:00:48.780 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:48.780 + for nvme in "${!nvme_files[@]}" 00:00:48.780 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:00:48.780 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:48.780 + for nvme in "${!nvme_files[@]}" 00:00:48.780 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:00:48.780 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:48.780 + for nvme in "${!nvme_files[@]}" 00:00:48.780 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:00:49.037 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:49.037 + for nvme in "${!nvme_files[@]}" 00:00:49.037 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:00:49.037 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:49.037 + for nvme in "${!nvme_files[@]}" 00:00:49.037 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:00:49.037 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:49.037 + for nvme in "${!nvme_files[@]}" 00:00:49.037 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:00:49.295 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:49.295 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:00:49.295 + echo 'End stage prepare_nvme.sh' 00:00:49.295 End stage prepare_nvme.sh 00:00:49.309 [Pipeline] sh 00:00:49.590 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:49.590 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora38 00:00:49.590 00:00:49.590 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:00:49.590 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:00:49.590 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:49.590 HELP=0 00:00:49.590 DRY_RUN=0 00:00:49.590 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:00:49.590 NVME_DISKS_TYPE=nvme,nvme, 00:00:49.590 NVME_AUTO_CREATE=0 00:00:49.590 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:00:49.590 NVME_CMB=,, 00:00:49.590 NVME_PMR=,, 00:00:49.590 NVME_ZNS=,, 00:00:49.590 NVME_MS=,, 00:00:49.590 NVME_FDP=,, 00:00:49.590 
SPDK_VAGRANT_DISTRO=fedora38 00:00:49.590 SPDK_VAGRANT_VMCPU=10 00:00:49.590 SPDK_VAGRANT_VMRAM=12288 00:00:49.590 SPDK_VAGRANT_PROVIDER=libvirt 00:00:49.590 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:49.590 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:49.590 SPDK_OPENSTACK_NETWORK=0 00:00:49.590 VAGRANT_PACKAGE_BOX=0 00:00:49.590 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:49.590 FORCE_DISTRO=true 00:00:49.590 VAGRANT_BOX_VERSION= 00:00:49.591 EXTRA_VAGRANTFILES= 00:00:49.591 NIC_MODEL=e1000 00:00:49.591 00:00:49.591 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:00:49.591 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:52.874 Bringing machine 'default' up with 'libvirt' provider... 00:00:53.440 ==> default: Creating image (snapshot of base box volume). 00:00:53.699 ==> default: Creating domain with the following settings... 00:00:53.700 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720621145_b1ab6eb2870aa915e86c 00:00:53.700 ==> default: -- Domain type: kvm 00:00:53.700 ==> default: -- Cpus: 10 00:00:53.700 ==> default: -- Feature: acpi 00:00:53.700 ==> default: -- Feature: apic 00:00:53.700 ==> default: -- Feature: pae 00:00:53.700 ==> default: -- Memory: 12288M 00:00:53.700 ==> default: -- Memory Backing: hugepages: 00:00:53.700 ==> default: -- Management MAC: 00:00:53.700 ==> default: -- Loader: 00:00:53.700 ==> default: -- Nvram: 00:00:53.700 ==> default: -- Base box: spdk/fedora38 00:00:53.700 ==> default: -- Storage pool: default 00:00:53.700 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720621145_b1ab6eb2870aa915e86c.img (20G) 00:00:53.700 ==> default: -- Volume Cache: default 00:00:53.700 ==> default: -- Kernel: 00:00:53.700 ==> default: -- Initrd: 00:00:53.700 ==> default: -- Graphics Type: vnc 00:00:53.700 ==> default: -- Graphics Port: -1 00:00:53.700 ==> default: -- Graphics IP: 127.0.0.1 00:00:53.700 ==> default: -- Graphics Password: Not defined 00:00:53.700 ==> default: -- Video Type: cirrus 00:00:53.700 ==> default: -- Video VRAM: 9216 00:00:53.700 ==> default: -- Sound Type: 00:00:53.700 ==> default: -- Keymap: en-us 00:00:53.700 ==> default: -- TPM Path: 00:00:53.700 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:53.700 ==> default: -- Command line args: 00:00:53.700 ==> default: -> value=-device, 00:00:53.700 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:53.700 ==> default: -> value=-drive, 00:00:53.700 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:00:53.700 ==> default: -> value=-device, 00:00:53.700 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:53.700 ==> default: -> value=-device, 00:00:53.700 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:53.700 ==> default: -> value=-drive, 00:00:53.700 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:53.700 ==> default: -> value=-device, 00:00:53.700 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:53.700 ==> default: -> value=-drive, 00:00:53.700 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:53.700 ==> default: -> value=-device, 00:00:53.700 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:53.700 ==> default: -> value=-drive, 00:00:53.700 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:53.700 ==> default: -> value=-device, 00:00:53.700 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:53.700 ==> default: Creating shared folders metadata... 00:00:53.700 ==> default: Starting domain. 00:00:55.076 ==> default: Waiting for domain to get an IP address... 00:01:13.150 ==> default: Waiting for SSH to become available... 00:01:13.150 ==> default: Configuring and enabling network interfaces... 00:01:16.430 default: SSH address: 192.168.121.82:22 00:01:16.430 default: SSH username: vagrant 00:01:16.430 default: SSH auth method: private key 00:01:18.328 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:26.439 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:31.698 ==> default: Mounting SSHFS shared folder... 00:01:32.630 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:32.630 ==> default: Checking Mount.. 00:01:34.004 ==> default: Folder Successfully Mounted! 00:01:34.004 ==> default: Running provisioner: file... 00:01:34.568 default: ~/.gitconfig => .gitconfig 00:01:35.191 00:01:35.191 SUCCESS! 00:01:35.191 00:01:35.191 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:35.191 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:35.191 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:01:35.191 00:01:35.200 [Pipeline] } 00:01:35.216 [Pipeline] // stage 00:01:35.225 [Pipeline] dir 00:01:35.225 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:01:35.227 [Pipeline] { 00:01:35.239 [Pipeline] catchError 00:01:35.241 [Pipeline] { 00:01:35.254 [Pipeline] sh 00:01:35.527 + vagrant ssh-config --host vagrant 00:01:35.527 + sed -ne /^Host/,$p 00:01:35.527 + tee ssh_conf 00:01:39.707 Host vagrant 00:01:39.707 HostName 192.168.121.82 00:01:39.707 User vagrant 00:01:39.707 Port 22 00:01:39.707 UserKnownHostsFile /dev/null 00:01:39.707 StrictHostKeyChecking no 00:01:39.707 PasswordAuthentication no 00:01:39.707 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:39.707 IdentitiesOnly yes 00:01:39.707 LogLevel FATAL 00:01:39.707 ForwardAgent yes 00:01:39.707 ForwardX11 yes 00:01:39.707 00:01:39.719 [Pipeline] withEnv 00:01:39.721 [Pipeline] { 00:01:39.737 [Pipeline] sh 00:01:40.015 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:40.015 source /etc/os-release 00:01:40.015 [[ -e /image.version ]] && img=$(< /image.version) 00:01:40.015 # Minimal, systemd-like check. 
00:01:40.015 if [[ -e /.dockerenv ]]; then 00:01:40.015 # Clear garbage from the node's name: 00:01:40.015 # agt-er_autotest_547-896 -> autotest_547-896 00:01:40.015 # $HOSTNAME is the actual container id 00:01:40.015 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:40.015 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:40.015 # We can assume this is a mount from a host where container is running, 00:01:40.015 # so fetch its hostname to easily identify the target swarm worker. 00:01:40.015 container="$(< /etc/hostname) ($agent)" 00:01:40.015 else 00:01:40.015 # Fallback 00:01:40.015 container=$agent 00:01:40.015 fi 00:01:40.015 fi 00:01:40.015 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:40.015 00:01:40.040 [Pipeline] } 00:01:40.059 [Pipeline] // withEnv 00:01:40.068 [Pipeline] setCustomBuildProperty 00:01:40.083 [Pipeline] stage 00:01:40.085 [Pipeline] { (Tests) 00:01:40.107 [Pipeline] sh 00:01:40.391 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:40.660 [Pipeline] sh 00:01:40.935 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:40.950 [Pipeline] timeout 00:01:40.950 Timeout set to expire in 40 min 00:01:40.952 [Pipeline] { 00:01:40.967 [Pipeline] sh 00:01:41.244 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:41.809 HEAD is now at 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:01:41.822 [Pipeline] sh 00:01:42.094 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:42.365 [Pipeline] sh 00:01:42.643 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:42.979 [Pipeline] sh 00:01:43.256 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:01:43.256 ++ readlink -f spdk_repo 00:01:43.256 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:43.256 + [[ -n /home/vagrant/spdk_repo ]] 00:01:43.256 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:43.256 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:43.256 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:43.256 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:43.256 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:43.256 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:01:43.256 + cd /home/vagrant/spdk_repo 00:01:43.256 + source /etc/os-release 00:01:43.256 ++ NAME='Fedora Linux' 00:01:43.256 ++ VERSION='38 (Cloud Edition)' 00:01:43.256 ++ ID=fedora 00:01:43.256 ++ VERSION_ID=38 00:01:43.256 ++ VERSION_CODENAME= 00:01:43.256 ++ PLATFORM_ID=platform:f38 00:01:43.256 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:43.256 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:43.256 ++ LOGO=fedora-logo-icon 00:01:43.256 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:43.256 ++ HOME_URL=https://fedoraproject.org/ 00:01:43.256 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:43.256 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:43.256 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:43.256 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:43.256 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:43.256 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:43.256 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:43.256 ++ SUPPORT_END=2024-05-14 00:01:43.256 ++ VARIANT='Cloud Edition' 00:01:43.256 ++ VARIANT_ID=cloud 00:01:43.256 + uname -a 00:01:43.256 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:43.256 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:43.822 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:43.822 Hugepages 00:01:43.822 node hugesize free / total 00:01:43.822 node0 1048576kB 0 / 0 00:01:43.822 node0 2048kB 0 / 0 00:01:43.822 00:01:43.822 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:43.822 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:43.822 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:43.822 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:43.822 + rm -f /tmp/spdk-ld-path 00:01:43.822 + source autorun-spdk.conf 00:01:43.822 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:43.822 ++ SPDK_TEST_NVMF=1 00:01:43.822 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:43.822 ++ SPDK_TEST_USDT=1 00:01:43.822 ++ SPDK_RUN_UBSAN=1 00:01:43.822 ++ SPDK_TEST_NVMF_MDNS=1 00:01:43.822 ++ NET_TYPE=virt 00:01:43.822 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:43.822 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:43.822 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:43.822 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:43.822 ++ RUN_NIGHTLY=1 00:01:43.822 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:43.822 + [[ -n '' ]] 00:01:43.822 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:43.822 + for M in /var/spdk/build-*-manifest.txt 00:01:43.822 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:43.822 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:43.822 + for M in /var/spdk/build-*-manifest.txt 00:01:43.822 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:43.822 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:43.822 ++ uname 00:01:44.080 + [[ Linux == \L\i\n\u\x ]] 00:01:44.080 + sudo dmesg -T 00:01:44.080 + sudo dmesg --clear 00:01:44.080 + dmesg_pid=5900 00:01:44.080 + sudo dmesg -Tw 00:01:44.080 + [[ Fedora Linux == FreeBSD ]] 00:01:44.080 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:44.080 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:44.080 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:44.080 + [[ -x /usr/src/fio-static/fio ]] 00:01:44.080 + export FIO_BIN=/usr/src/fio-static/fio 00:01:44.080 + FIO_BIN=/usr/src/fio-static/fio 00:01:44.080 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:44.080 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:44.080 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:44.080 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:44.080 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:44.080 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:44.080 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:44.080 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:44.080 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:44.080 Test configuration: 00:01:44.080 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:44.080 SPDK_TEST_NVMF=1 00:01:44.080 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:44.080 SPDK_TEST_USDT=1 00:01:44.080 SPDK_RUN_UBSAN=1 00:01:44.080 SPDK_TEST_NVMF_MDNS=1 00:01:44.080 NET_TYPE=virt 00:01:44.080 SPDK_JSONRPC_GO_CLIENT=1 00:01:44.080 SPDK_TEST_NATIVE_DPDK=main 00:01:44.080 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:44.080 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:44.080 RUN_NIGHTLY=1 14:19:56 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:44.080 14:19:56 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:44.080 14:19:56 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:44.080 14:19:56 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:44.080 14:19:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.080 14:19:56 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.080 14:19:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.080 14:19:56 -- paths/export.sh@5 -- $ export PATH 00:01:44.080 14:19:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.080 14:19:56 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:44.080 
14:19:56 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:44.080 14:19:56 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720621196.XXXXXX 00:01:44.080 14:19:56 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720621196.KrabBd 00:01:44.080 14:19:56 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:44.080 14:19:56 -- common/autobuild_common.sh@450 -- $ '[' -n main ']' 00:01:44.080 14:19:56 -- common/autobuild_common.sh@451 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:44.080 14:19:56 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:01:44.080 14:19:56 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:44.080 14:19:56 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:44.080 14:19:56 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:44.080 14:19:56 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:44.080 14:19:56 -- common/autotest_common.sh@10 -- $ set +x 00:01:44.080 14:19:56 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:01:44.080 14:19:56 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:44.080 14:19:56 -- pm/common@17 -- $ local monitor 00:01:44.080 14:19:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.080 14:19:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.080 14:19:56 -- pm/common@21 -- $ date +%s 00:01:44.080 14:19:56 -- pm/common@25 -- $ sleep 1 00:01:44.080 14:19:56 -- pm/common@21 -- $ date +%s 00:01:44.080 14:19:56 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720621196 00:01:44.080 14:19:56 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720621196 00:01:44.080 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720621196_collect-vmstat.pm.log 00:01:44.080 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720621196_collect-cpu-load.pm.log 00:01:45.014 14:19:57 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:45.014 14:19:57 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:45.014 14:19:57 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:45.014 14:19:57 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:45.014 14:19:57 -- spdk/autobuild.sh@16 -- $ date -u 00:01:45.272 Wed Jul 10 02:19:57 PM UTC 2024 00:01:45.272 14:19:57 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:45.272 v24.09-pre-200-g9937c0160 00:01:45.272 14:19:57 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:45.272 14:19:57 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:45.272 14:19:57 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:45.272 14:19:57 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:45.272 14:19:57 -- 
common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:45.272 14:19:57 -- common/autotest_common.sh@10 -- $ set +x 00:01:45.272 ************************************ 00:01:45.272 START TEST ubsan 00:01:45.272 ************************************ 00:01:45.272 using ubsan 00:01:45.272 14:19:57 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:45.272 00:01:45.272 real 0m0.000s 00:01:45.272 user 0m0.000s 00:01:45.272 sys 0m0.000s 00:01:45.272 14:19:57 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:45.272 ************************************ 00:01:45.272 END TEST ubsan 00:01:45.272 ************************************ 00:01:45.272 14:19:57 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:45.272 14:19:57 -- common/autotest_common.sh@1142 -- $ return 0 00:01:45.272 14:19:57 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:01:45.272 14:19:57 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:45.272 14:19:57 -- common/autobuild_common.sh@436 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:45.272 14:19:57 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:01:45.272 14:19:57 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:45.272 14:19:57 -- common/autotest_common.sh@10 -- $ set +x 00:01:45.272 ************************************ 00:01:45.272 START TEST build_native_dpdk 00:01:45.272 ************************************ 00:01:45.272 14:19:57 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:01:45.272 14:19:57 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:45.272 14:19:57 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:45.272 14:19:57 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:45.272 14:19:57 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:45.272 14:19:57 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:45.272 14:19:57 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:45.272 14:19:57 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:45.272 14:19:57 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:45.272 14:19:57 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:45.272 14:19:57 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:45.272 14:19:57 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:45.272 14:19:57 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:45.272 14:19:57 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:45.272 14:19:57 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:45.272 14:19:57 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:01:45.272 14:19:57 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:45.272 14:19:57 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:01:45.272 14:19:57 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:01:45.272 14:19:57 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:01:45.273 14:19:57 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:01:45.273 830d7c98d6 eal/x86: improve 16 bytes constant memcpy 00:01:45.273 8e8aa4a4f9 devtools: check that maintainers are listed in mailmap 00:01:45.273 ec8fb5696c examples/vm_power_manager: remove use of EAL logtype 00:01:45.273 13830b98b2 examples/l2fwd-keepalive: use dedicated logtype 00:01:45.273 8570d76c64 app/testpmd: use dedicated log macro instead of EAL logtype 00:01:45.273 14:19:57 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:45.273 14:19:57 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:45.273 14:19:57 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc1 00:01:45.273 14:19:57 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:45.273 14:19:57 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:45.273 14:19:57 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:45.273 14:19:57 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:45.273 14:19:57 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:45.273 14:19:57 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:45.273 14:19:57 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:45.273 14:19:57 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:45.273 14:19:57 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:45.273 14:19:57 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:45.273 14:19:57 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:45.273 14:19:57 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:01:45.273 14:19:57 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:45.273 14:19:57 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:45.273 14:19:57 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc1 21.11.0 00:01:45.273 14:19:57 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc1 '<' 21.11.0 00:01:45.273 14:19:57 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:45.273 14:19:57 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:45.273 14:19:57 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:45.273 14:19:57 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:45.273 14:19:57 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:45.273 14:19:57 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:45.273 14:19:57 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:45.273 14:19:57 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:01:45.273 14:19:57 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:45.273 14:19:57 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:45.273 14:19:57 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:45.273 14:19:57 
build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:45.273 14:19:57 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:45.273 14:19:57 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:45.273 14:19:57 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:01:45.273 14:19:57 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:45.273 14:19:57 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:45.273 14:19:57 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:45.273 14:19:57 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:01:45.273 14:19:57 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:45.273 14:19:57 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:45.273 14:19:57 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:45.273 14:19:57 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:45.273 14:19:57 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:45.273 14:19:57 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:45.273 14:19:57 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:45.273 14:19:57 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:45.273 patching file config/rte_config.h 00:01:45.273 Hunk #1 succeeded at 70 (offset 11 lines). 00:01:45.273 14:19:57 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:45.273 14:19:57 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:45.273 14:19:57 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:45.273 14:19:57 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:45.273 14:19:57 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:50.547 The Meson build system 00:01:50.547 Version: 1.3.1 00:01:50.547 Source dir: /home/vagrant/spdk_repo/dpdk 00:01:50.547 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:01:50.547 Build type: native build 00:01:50.547 Program cat found: YES (/usr/bin/cat) 00:01:50.547 Project name: DPDK 00:01:50.547 Project version: 24.07.0-rc1 00:01:50.547 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:50.547 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:50.547 Host machine cpu family: x86_64 00:01:50.547 Host machine cpu: x86_64 00:01:50.547 Message: ## Building in Developer Mode ## 00:01:50.547 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:50.547 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:01:50.547 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:01:50.547 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:01:50.547 Program cat found: YES (/usr/bin/cat) 00:01:50.547 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:50.547 Compiler for C supports arguments -march=native: YES 00:01:50.547 Checking for size of "void *" : 8 00:01:50.547 Checking for size of "void *" : 8 (cached) 00:01:50.547 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:50.547 Library m found: YES 00:01:50.547 Library numa found: YES 00:01:50.547 Has header "numaif.h" : YES 00:01:50.547 Library fdt found: NO 00:01:50.547 Library execinfo found: NO 00:01:50.547 Has header "execinfo.h" : YES 00:01:50.547 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:50.547 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:50.547 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:50.547 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:50.547 Run-time dependency openssl found: YES 3.0.9 00:01:50.547 Run-time dependency libpcap found: YES 1.10.4 00:01:50.547 Has header "pcap.h" with dependency libpcap: YES 00:01:50.547 Compiler for C supports arguments -Wcast-qual: YES 00:01:50.547 Compiler for C supports arguments -Wdeprecated: YES 00:01:50.547 Compiler for C supports arguments -Wformat: YES 00:01:50.547 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:50.547 Compiler for C supports arguments -Wformat-security: NO 00:01:50.547 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:50.547 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:50.547 Compiler for C supports arguments -Wnested-externs: YES 00:01:50.547 Compiler for C supports arguments -Wold-style-definition: YES 00:01:50.547 Compiler for C supports arguments -Wpointer-arith: YES 00:01:50.547 Compiler for C supports arguments -Wsign-compare: YES 00:01:50.547 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:50.547 Compiler for C supports arguments -Wundef: YES 00:01:50.547 Compiler for C supports arguments -Wwrite-strings: YES 00:01:50.547 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:50.547 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:50.547 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:50.547 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:50.547 Program objdump found: YES (/usr/bin/objdump) 00:01:50.547 Compiler for C supports arguments -mavx512f: YES 00:01:50.547 Checking if "AVX512 checking" compiles: YES 00:01:50.547 Fetching value of define "__SSE4_2__" : 1 00:01:50.547 Fetching value of define "__AES__" : 1 00:01:50.547 Fetching value of define "__AVX__" : 1 00:01:50.547 Fetching value of define "__AVX2__" : 1 00:01:50.547 Fetching value of define "__AVX512BW__" : (undefined) 00:01:50.547 Fetching value of define "__AVX512CD__" : (undefined) 00:01:50.547 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:50.547 Fetching value of define "__AVX512F__" : (undefined) 00:01:50.547 Fetching value of define "__AVX512VL__" : (undefined) 00:01:50.547 Fetching value of define "__PCLMUL__" : 1 00:01:50.547 Fetching value of define "__RDRND__" : 1 00:01:50.547 Fetching value of define "__RDSEED__" : 1 00:01:50.547 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:50.547 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:50.547 Message: lib/log: Defining dependency "log" 00:01:50.547 Message: lib/kvargs: Defining dependency "kvargs" 00:01:50.547 Message: lib/argparse: Defining dependency "argparse" 00:01:50.547 Message: lib/telemetry: Defining dependency "telemetry" 00:01:50.547 Checking for function "getentropy" : NO 
00:01:50.547 Message: lib/eal: Defining dependency "eal" 00:01:50.547 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:01:50.547 Message: lib/ring: Defining dependency "ring" 00:01:50.547 Message: lib/rcu: Defining dependency "rcu" 00:01:50.547 Message: lib/mempool: Defining dependency "mempool" 00:01:50.547 Message: lib/mbuf: Defining dependency "mbuf" 00:01:50.547 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:50.547 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:50.547 Compiler for C supports arguments -mpclmul: YES 00:01:50.547 Compiler for C supports arguments -maes: YES 00:01:50.547 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:50.547 Compiler for C supports arguments -mavx512bw: YES 00:01:50.547 Compiler for C supports arguments -mavx512dq: YES 00:01:50.547 Compiler for C supports arguments -mavx512vl: YES 00:01:50.547 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:50.547 Compiler for C supports arguments -mavx2: YES 00:01:50.547 Compiler for C supports arguments -mavx: YES 00:01:50.547 Message: lib/net: Defining dependency "net" 00:01:50.547 Message: lib/meter: Defining dependency "meter" 00:01:50.547 Message: lib/ethdev: Defining dependency "ethdev" 00:01:50.547 Message: lib/pci: Defining dependency "pci" 00:01:50.547 Message: lib/cmdline: Defining dependency "cmdline" 00:01:50.547 Message: lib/metrics: Defining dependency "metrics" 00:01:50.547 Message: lib/hash: Defining dependency "hash" 00:01:50.547 Message: lib/timer: Defining dependency "timer" 00:01:50.547 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:50.547 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:50.547 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:50.547 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:50.547 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:50.547 Message: lib/acl: Defining dependency "acl" 00:01:50.547 Message: lib/bbdev: Defining dependency "bbdev" 00:01:50.547 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:50.547 Run-time dependency libelf found: YES 0.190 00:01:50.547 Message: lib/bpf: Defining dependency "bpf" 00:01:50.547 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:50.547 Message: lib/compressdev: Defining dependency "compressdev" 00:01:50.547 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:50.547 Message: lib/distributor: Defining dependency "distributor" 00:01:50.547 Message: lib/dmadev: Defining dependency "dmadev" 00:01:50.547 Message: lib/efd: Defining dependency "efd" 00:01:50.547 Message: lib/eventdev: Defining dependency "eventdev" 00:01:50.547 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:50.547 Message: lib/gpudev: Defining dependency "gpudev" 00:01:50.547 Message: lib/gro: Defining dependency "gro" 00:01:50.547 Message: lib/gso: Defining dependency "gso" 00:01:50.547 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:50.547 Message: lib/jobstats: Defining dependency "jobstats" 00:01:50.547 Message: lib/latencystats: Defining dependency "latencystats" 00:01:50.548 Message: lib/lpm: Defining dependency "lpm" 00:01:50.548 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:50.548 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:50.548 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:50.548 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 
00:01:50.548 Message: lib/member: Defining dependency "member" 00:01:50.548 Message: lib/pcapng: Defining dependency "pcapng" 00:01:50.548 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:50.548 Message: lib/power: Defining dependency "power" 00:01:50.548 Message: lib/rawdev: Defining dependency "rawdev" 00:01:50.548 Message: lib/regexdev: Defining dependency "regexdev" 00:01:50.548 Message: lib/mldev: Defining dependency "mldev" 00:01:50.548 Message: lib/rib: Defining dependency "rib" 00:01:50.548 Message: lib/reorder: Defining dependency "reorder" 00:01:50.548 Message: lib/sched: Defining dependency "sched" 00:01:50.548 Message: lib/security: Defining dependency "security" 00:01:50.548 Message: lib/stack: Defining dependency "stack" 00:01:50.548 Has header "linux/userfaultfd.h" : YES 00:01:50.548 Has header "linux/vduse.h" : YES 00:01:50.548 Message: lib/vhost: Defining dependency "vhost" 00:01:50.548 Message: lib/ipsec: Defining dependency "ipsec" 00:01:50.548 Message: lib/pdcp: Defining dependency "pdcp" 00:01:50.548 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:50.548 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:50.548 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:50.548 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:50.548 Message: lib/fib: Defining dependency "fib" 00:01:50.548 Message: lib/port: Defining dependency "port" 00:01:50.548 Message: lib/pdump: Defining dependency "pdump" 00:01:50.548 Message: lib/table: Defining dependency "table" 00:01:50.548 Message: lib/pipeline: Defining dependency "pipeline" 00:01:50.548 Message: lib/graph: Defining dependency "graph" 00:01:50.548 Message: lib/node: Defining dependency "node" 00:01:50.548 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:52.447 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:52.447 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:52.447 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:52.447 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:52.447 Compiler for C supports arguments -Wno-unused-value: YES 00:01:52.447 Compiler for C supports arguments -Wno-format: YES 00:01:52.447 Compiler for C supports arguments -Wno-format-security: YES 00:01:52.447 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:52.447 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:52.447 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:52.447 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:52.447 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:52.447 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:52.447 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:52.447 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:52.447 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:52.447 Has header "sys/epoll.h" : YES 00:01:52.447 Program doxygen found: YES (/usr/bin/doxygen) 00:01:52.447 Configuring doxy-api-html.conf using configuration 00:01:52.447 Configuring doxy-api-man.conf using configuration 00:01:52.447 Program mandb found: YES (/usr/bin/mandb) 00:01:52.447 Program sphinx-build found: NO 00:01:52.447 Configuring rte_build_config.h using configuration 00:01:52.447 Message: 00:01:52.447 ================= 00:01:52.447 Applications Enabled 00:01:52.447 ================= 00:01:52.447 00:01:52.447 apps: 
00:01:52.447 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:52.447 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:52.447 test-pmd, test-regex, test-sad, test-security-perf, 00:01:52.447 00:01:52.447 Message: 00:01:52.447 ================= 00:01:52.447 Libraries Enabled 00:01:52.447 ================= 00:01:52.447 00:01:52.447 libs: 00:01:52.447 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:01:52.447 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:01:52.447 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:01:52.447 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:01:52.447 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:01:52.447 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:01:52.447 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:01:52.447 graph, node, 00:01:52.447 00:01:52.448 Message: 00:01:52.448 =============== 00:01:52.448 Drivers Enabled 00:01:52.448 =============== 00:01:52.448 00:01:52.448 common: 00:01:52.448 00:01:52.448 bus: 00:01:52.448 pci, vdev, 00:01:52.448 mempool: 00:01:52.448 ring, 00:01:52.448 dma: 00:01:52.448 00:01:52.448 net: 00:01:52.448 i40e, 00:01:52.448 raw: 00:01:52.448 00:01:52.448 crypto: 00:01:52.448 00:01:52.448 compress: 00:01:52.448 00:01:52.448 regex: 00:01:52.448 00:01:52.448 ml: 00:01:52.448 00:01:52.448 vdpa: 00:01:52.448 00:01:52.448 event: 00:01:52.448 00:01:52.448 baseband: 00:01:52.448 00:01:52.448 gpu: 00:01:52.448 00:01:52.448 00:01:52.448 Message: 00:01:52.448 ================= 00:01:52.448 Content Skipped 00:01:52.448 ================= 00:01:52.448 00:01:52.448 apps: 00:01:52.448 00:01:52.448 libs: 00:01:52.448 00:01:52.448 drivers: 00:01:52.448 common/cpt: not in enabled drivers build config 00:01:52.448 common/dpaax: not in enabled drivers build config 00:01:52.448 common/iavf: not in enabled drivers build config 00:01:52.448 common/idpf: not in enabled drivers build config 00:01:52.448 common/ionic: not in enabled drivers build config 00:01:52.448 common/mvep: not in enabled drivers build config 00:01:52.448 common/octeontx: not in enabled drivers build config 00:01:52.448 bus/auxiliary: not in enabled drivers build config 00:01:52.448 bus/cdx: not in enabled drivers build config 00:01:52.448 bus/dpaa: not in enabled drivers build config 00:01:52.448 bus/fslmc: not in enabled drivers build config 00:01:52.448 bus/ifpga: not in enabled drivers build config 00:01:52.448 bus/platform: not in enabled drivers build config 00:01:52.448 bus/uacce: not in enabled drivers build config 00:01:52.448 bus/vmbus: not in enabled drivers build config 00:01:52.448 common/cnxk: not in enabled drivers build config 00:01:52.448 common/mlx5: not in enabled drivers build config 00:01:52.448 common/nfp: not in enabled drivers build config 00:01:52.448 common/nitrox: not in enabled drivers build config 00:01:52.448 common/qat: not in enabled drivers build config 00:01:52.448 common/sfc_efx: not in enabled drivers build config 00:01:52.448 mempool/bucket: not in enabled drivers build config 00:01:52.448 mempool/cnxk: not in enabled drivers build config 00:01:52.448 mempool/dpaa: not in enabled drivers build config 00:01:52.448 mempool/dpaa2: not in enabled drivers build config 00:01:52.448 mempool/octeontx: not in enabled drivers build config 00:01:52.448 mempool/stack: not in enabled drivers build config 00:01:52.448 
dma/cnxk: not in enabled drivers build config 00:01:52.448 dma/dpaa: not in enabled drivers build config 00:01:52.448 dma/dpaa2: not in enabled drivers build config 00:01:52.448 dma/hisilicon: not in enabled drivers build config 00:01:52.448 dma/idxd: not in enabled drivers build config 00:01:52.448 dma/ioat: not in enabled drivers build config 00:01:52.448 dma/odm: not in enabled drivers build config 00:01:52.448 dma/skeleton: not in enabled drivers build config 00:01:52.448 net/af_packet: not in enabled drivers build config 00:01:52.448 net/af_xdp: not in enabled drivers build config 00:01:52.448 net/ark: not in enabled drivers build config 00:01:52.448 net/atlantic: not in enabled drivers build config 00:01:52.448 net/avp: not in enabled drivers build config 00:01:52.448 net/axgbe: not in enabled drivers build config 00:01:52.448 net/bnx2x: not in enabled drivers build config 00:01:52.448 net/bnxt: not in enabled drivers build config 00:01:52.448 net/bonding: not in enabled drivers build config 00:01:52.448 net/cnxk: not in enabled drivers build config 00:01:52.448 net/cpfl: not in enabled drivers build config 00:01:52.448 net/cxgbe: not in enabled drivers build config 00:01:52.448 net/dpaa: not in enabled drivers build config 00:01:52.448 net/dpaa2: not in enabled drivers build config 00:01:52.448 net/e1000: not in enabled drivers build config 00:01:52.448 net/ena: not in enabled drivers build config 00:01:52.448 net/enetc: not in enabled drivers build config 00:01:52.448 net/enetfec: not in enabled drivers build config 00:01:52.448 net/enic: not in enabled drivers build config 00:01:52.448 net/failsafe: not in enabled drivers build config 00:01:52.448 net/fm10k: not in enabled drivers build config 00:01:52.448 net/gve: not in enabled drivers build config 00:01:52.448 net/hinic: not in enabled drivers build config 00:01:52.448 net/hns3: not in enabled drivers build config 00:01:52.448 net/iavf: not in enabled drivers build config 00:01:52.448 net/ice: not in enabled drivers build config 00:01:52.448 net/idpf: not in enabled drivers build config 00:01:52.448 net/igc: not in enabled drivers build config 00:01:52.448 net/ionic: not in enabled drivers build config 00:01:52.448 net/ipn3ke: not in enabled drivers build config 00:01:52.448 net/ixgbe: not in enabled drivers build config 00:01:52.448 net/mana: not in enabled drivers build config 00:01:52.448 net/memif: not in enabled drivers build config 00:01:52.448 net/mlx4: not in enabled drivers build config 00:01:52.448 net/mlx5: not in enabled drivers build config 00:01:52.448 net/mvneta: not in enabled drivers build config 00:01:52.448 net/mvpp2: not in enabled drivers build config 00:01:52.448 net/netvsc: not in enabled drivers build config 00:01:52.448 net/nfb: not in enabled drivers build config 00:01:52.448 net/nfp: not in enabled drivers build config 00:01:52.448 net/ngbe: not in enabled drivers build config 00:01:52.448 net/null: not in enabled drivers build config 00:01:52.448 net/octeontx: not in enabled drivers build config 00:01:52.448 net/octeon_ep: not in enabled drivers build config 00:01:52.448 net/pcap: not in enabled drivers build config 00:01:52.448 net/pfe: not in enabled drivers build config 00:01:52.448 net/qede: not in enabled drivers build config 00:01:52.448 net/ring: not in enabled drivers build config 00:01:52.448 net/sfc: not in enabled drivers build config 00:01:52.448 net/softnic: not in enabled drivers build config 00:01:52.448 net/tap: not in enabled drivers build config 00:01:52.448 net/thunderx: not in 
enabled drivers build config 00:01:52.448 net/txgbe: not in enabled drivers build config 00:01:52.448 net/vdev_netvsc: not in enabled drivers build config 00:01:52.448 net/vhost: not in enabled drivers build config 00:01:52.448 net/virtio: not in enabled drivers build config 00:01:52.448 net/vmxnet3: not in enabled drivers build config 00:01:52.448 raw/cnxk_bphy: not in enabled drivers build config 00:01:52.448 raw/cnxk_gpio: not in enabled drivers build config 00:01:52.448 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:52.448 raw/ifpga: not in enabled drivers build config 00:01:52.448 raw/ntb: not in enabled drivers build config 00:01:52.448 raw/skeleton: not in enabled drivers build config 00:01:52.448 crypto/armv8: not in enabled drivers build config 00:01:52.448 crypto/bcmfs: not in enabled drivers build config 00:01:52.448 crypto/caam_jr: not in enabled drivers build config 00:01:52.448 crypto/ccp: not in enabled drivers build config 00:01:52.448 crypto/cnxk: not in enabled drivers build config 00:01:52.448 crypto/dpaa_sec: not in enabled drivers build config 00:01:52.448 crypto/dpaa2_sec: not in enabled drivers build config 00:01:52.448 crypto/ionic: not in enabled drivers build config 00:01:52.448 crypto/ipsec_mb: not in enabled drivers build config 00:01:52.448 crypto/mlx5: not in enabled drivers build config 00:01:52.448 crypto/mvsam: not in enabled drivers build config 00:01:52.448 crypto/nitrox: not in enabled drivers build config 00:01:52.448 crypto/null: not in enabled drivers build config 00:01:52.448 crypto/octeontx: not in enabled drivers build config 00:01:52.448 crypto/openssl: not in enabled drivers build config 00:01:52.448 crypto/scheduler: not in enabled drivers build config 00:01:52.448 crypto/uadk: not in enabled drivers build config 00:01:52.448 crypto/virtio: not in enabled drivers build config 00:01:52.448 compress/isal: not in enabled drivers build config 00:01:52.448 compress/mlx5: not in enabled drivers build config 00:01:52.448 compress/nitrox: not in enabled drivers build config 00:01:52.448 compress/octeontx: not in enabled drivers build config 00:01:52.448 compress/uadk: not in enabled drivers build config 00:01:52.448 compress/zlib: not in enabled drivers build config 00:01:52.448 regex/mlx5: not in enabled drivers build config 00:01:52.448 regex/cn9k: not in enabled drivers build config 00:01:52.448 ml/cnxk: not in enabled drivers build config 00:01:52.448 vdpa/ifc: not in enabled drivers build config 00:01:52.448 vdpa/mlx5: not in enabled drivers build config 00:01:52.448 vdpa/nfp: not in enabled drivers build config 00:01:52.448 vdpa/sfc: not in enabled drivers build config 00:01:52.448 event/cnxk: not in enabled drivers build config 00:01:52.448 event/dlb2: not in enabled drivers build config 00:01:52.448 event/dpaa: not in enabled drivers build config 00:01:52.448 event/dpaa2: not in enabled drivers build config 00:01:52.448 event/dsw: not in enabled drivers build config 00:01:52.448 event/opdl: not in enabled drivers build config 00:01:52.448 event/skeleton: not in enabled drivers build config 00:01:52.448 event/sw: not in enabled drivers build config 00:01:52.448 event/octeontx: not in enabled drivers build config 00:01:52.448 baseband/acc: not in enabled drivers build config 00:01:52.448 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:52.448 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:52.448 baseband/la12xx: not in enabled drivers build config 00:01:52.448 baseband/null: not in enabled drivers 
build config 00:01:52.448 baseband/turbo_sw: not in enabled drivers build config 00:01:52.448 gpu/cuda: not in enabled drivers build config 00:01:52.448 00:01:52.448 00:01:52.448 Build targets in project: 224 00:01:52.448 00:01:52.448 DPDK 24.07.0-rc1 00:01:52.448 00:01:52.448 User defined options 00:01:52.448 libdir : lib 00:01:52.448 prefix : /home/vagrant/spdk_repo/dpdk/build 00:01:52.448 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:52.448 c_link_args : 00:01:52.448 enable_docs : false 00:01:52.448 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:52.448 enable_kmods : false 00:01:52.448 machine : native 00:01:52.448 tests : false 00:01:52.448 00:01:52.448 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:52.448 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:52.448 14:20:04 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:01:52.448 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:01:52.448 [1/722] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:52.448 [2/722] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:52.448 [3/722] Linking static target lib/librte_kvargs.a 00:01:52.449 [4/722] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:52.449 [5/722] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:52.449 [6/722] Linking static target lib/librte_log.a 00:01:52.707 [7/722] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:01:52.707 [8/722] Linking static target lib/librte_argparse.a 00:01:52.707 [9/722] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.965 [10/722] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.965 [11/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:52.965 [12/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:52.965 [13/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:52.965 [14/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:52.965 [15/722] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:52.965 [16/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:52.965 [17/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:52.965 [18/722] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.223 [19/722] Linking target lib/librte_log.so.24.2 00:01:53.223 [20/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:53.481 [21/722] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols 00:01:53.481 [22/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:53.481 [23/722] Linking target lib/librte_kvargs.so.24.2 00:01:53.481 [24/722] Linking target lib/librte_argparse.so.24.2 00:01:53.481 [25/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:53.738 [26/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:53.738 [27/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:53.738 [28/722] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 
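[editor's note] For reference, the configure step summarized above can be reproduced by hand roughly as follows. This is a hedged sketch only: the option values and paths are copied from the "User defined options" block in this log, the SPDK wrapper script (common/autobuild_common.sh) that actually issued the command is not reproduced here, and the sketch uses the recommended `meson setup` spelling rather than the bare `meson [options]` form that triggered the deprecation warning above.

    # Sketch only -- values taken from the meson summary in this log, not from the wrapper script.
    cd /home/vagrant/spdk_repo/dpdk
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false
    ninja -C build-tmp -j10    # same -j10 build invocation as logged below

The enable_drivers allow-list above is why only bus/pci, bus/vdev, mempool/ring and net/i40e appear under "Drivers Enabled" while every other driver is reported as "not in enabled drivers build config".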
00:01:53.738 [29/722] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:01:53.738 [30/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:53.738 [31/722] Linking static target lib/librte_telemetry.a 00:01:53.738 [32/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:53.738 [33/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:53.996 [34/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:53.996 [35/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:53.996 [36/722] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.253 [37/722] Linking target lib/librte_telemetry.so.24.2 00:01:54.253 [38/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:54.253 [39/722] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:01:54.253 [40/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:54.253 [41/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:54.253 [42/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:54.253 [43/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:54.510 [44/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:54.510 [45/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:54.510 [46/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:54.511 [47/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:54.511 [48/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:54.768 [49/722] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:55.026 [50/722] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:55.026 [51/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:55.026 [52/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:55.026 [53/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:55.283 [54/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:55.283 [55/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:55.283 [56/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:55.283 [57/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:55.542 [58/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:55.542 [59/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:55.542 [60/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:55.542 [61/722] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:55.801 [62/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:55.801 [63/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:55.801 [64/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:55.801 [65/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:55.801 [66/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:55.801 [67/722] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 
00:01:55.801 [68/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:56.059 [69/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:56.059 [70/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:56.059 [71/722] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:56.317 [72/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:56.575 [73/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:56.575 [74/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:56.575 [75/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:56.575 [76/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:56.575 [77/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:56.575 [78/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:56.575 [79/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:56.575 [80/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:56.833 [81/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:56.833 [82/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:56.833 [83/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:57.091 [84/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:57.091 [85/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:57.091 [86/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:57.349 [87/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:57.349 [88/722] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:57.349 [89/722] Linking static target lib/librte_ring.a 00:01:57.607 [90/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:57.607 [91/722] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:57.607 [92/722] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:57.607 [93/722] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.607 [94/722] Linking static target lib/librte_eal.a 00:01:57.864 [95/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:57.864 [96/722] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:57.864 [97/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:57.864 [98/722] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:57.864 [99/722] Linking static target lib/librte_mempool.a 00:01:58.121 [100/722] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:58.121 [101/722] Linking static target lib/librte_rcu.a 00:01:58.121 [102/722] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:58.121 [103/722] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:58.378 [104/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:58.378 [105/722] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.378 [106/722] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:58.636 [107/722] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:58.636 [108/722] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:58.636 [109/722] Compiling C object 
lib/librte_net.a.p/net_rte_net.c.o 00:01:58.636 [110/722] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.636 [111/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:58.636 [112/722] Linking static target lib/librte_mbuf.a 00:01:58.895 [113/722] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:58.895 [114/722] Linking static target lib/librte_net.a 00:01:58.895 [115/722] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:59.156 [116/722] Linking static target lib/librte_meter.a 00:01:59.156 [117/722] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.156 [118/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:59.156 [119/722] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.156 [120/722] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.413 [121/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:59.413 [122/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:59.413 [123/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:59.976 [124/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:00.233 [125/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:00.233 [126/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:00.492 [127/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:00.749 [128/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:01.007 [129/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:01.007 [130/722] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:01.007 [131/722] Linking static target lib/librte_pci.a 00:02:01.007 [132/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:01.264 [133/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:01.264 [134/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:01.264 [135/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:01.265 [136/722] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.265 [137/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:01.522 [138/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:01.522 [139/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:01.522 [140/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:01.522 [141/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:01.522 [142/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:01.780 [143/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:01.780 [144/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:01.780 [145/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:01.780 [146/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:02.038 [147/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:02.038 [148/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:02.295 [149/722] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:02.295 [150/722] Linking static target lib/librte_cmdline.a 00:02:02.553 [151/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:02.810 [152/722] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:02.810 [153/722] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:02.810 [154/722] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:02.810 [155/722] Linking static target lib/librte_metrics.a 00:02:03.104 [156/722] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:03.378 [157/722] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.379 [158/722] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.636 [159/722] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:03.893 [160/722] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:04.459 [161/722] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:04.459 [162/722] Linking static target lib/librte_timer.a 00:02:05.021 [163/722] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:05.021 [164/722] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:05.021 [165/722] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.021 [166/722] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:05.952 [167/722] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:05.952 [168/722] Linking static target lib/librte_bitratestats.a 00:02:05.952 [169/722] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:05.952 [170/722] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.952 [171/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:05.952 [172/722] Linking static target lib/librte_ethdev.a 00:02:06.294 [173/722] Linking target lib/librte_eal.so.24.2 00:02:06.294 [174/722] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.294 [175/722] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:02:06.552 [176/722] Linking target lib/librte_ring.so.24.2 00:02:06.552 [177/722] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:02:06.552 [178/722] Linking target lib/librte_rcu.so.24.2 00:02:06.809 [179/722] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:06.809 [180/722] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:06.809 [181/722] Linking target lib/librte_meter.so.24.2 00:02:06.809 [182/722] Linking target lib/librte_mempool.so.24.2 00:02:06.809 [183/722] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:02:06.809 [184/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:06.810 [185/722] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:02:06.810 [186/722] Linking static target lib/librte_bbdev.a 00:02:07.067 [187/722] Linking target lib/librte_pci.so.24.2 00:02:07.067 [188/722] Linking target lib/librte_timer.so.24.2 00:02:07.067 [189/722] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:02:07.067 [190/722] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:07.067 [191/722] Linking static target lib/librte_hash.a 00:02:07.067 
[192/722] Linking target lib/librte_mbuf.so.24.2 00:02:07.067 [193/722] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:02:07.067 [194/722] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:02:07.326 [195/722] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:02:07.326 [196/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:07.326 [197/722] Linking target lib/librte_net.so.24.2 00:02:07.585 [198/722] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:02:07.585 [199/722] Linking target lib/librte_cmdline.so.24.2 00:02:07.849 [200/722] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.849 [201/722] Linking target lib/librte_bbdev.so.24.2 00:02:07.849 [202/722] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:07.849 [203/722] Linking static target lib/acl/libavx2_tmp.a 00:02:08.107 [204/722] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.107 [205/722] Linking target lib/librte_hash.so.24.2 00:02:08.107 [206/722] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:08.107 [207/722] Linking static target lib/acl/libavx512_tmp.a 00:02:08.107 [208/722] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:02:08.365 [209/722] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:08.365 [210/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:08.365 [211/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:08.365 [212/722] Linking static target lib/librte_acl.a 00:02:08.624 [213/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:08.624 [214/722] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.881 [215/722] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:08.881 [216/722] Linking static target lib/librte_cfgfile.a 00:02:08.881 [217/722] Linking target lib/librte_acl.so.24.2 00:02:08.881 [218/722] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols 00:02:09.139 [219/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:09.397 [220/722] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.397 [221/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:09.397 [222/722] Linking target lib/librte_cfgfile.so.24.2 00:02:09.397 [223/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:09.397 [224/722] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:09.655 [225/722] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:09.913 [226/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:10.170 [227/722] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:10.170 [228/722] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:10.170 [229/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:10.170 [230/722] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:10.427 [231/722] Linking static target lib/librte_compressdev.a 00:02:10.427 [232/722] Linking static target lib/librte_bpf.a 00:02:10.684 [233/722] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.684 [234/722] 
Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:10.684 [235/722] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:10.941 [236/722] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.941 [237/722] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:10.941 [238/722] Linking static target lib/librte_distributor.a 00:02:10.941 [239/722] Linking target lib/librte_compressdev.so.24.2 00:02:11.198 [240/722] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:11.457 [241/722] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.457 [242/722] Linking target lib/librte_distributor.so.24.2 00:02:11.457 [243/722] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:11.740 [244/722] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:11.740 [245/722] Linking static target lib/librte_dmadev.a 00:02:12.326 [246/722] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:12.326 [247/722] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.326 [248/722] Linking target lib/librte_dmadev.so.24.2 00:02:12.583 [249/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:12.583 [250/722] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:02:13.148 [251/722] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:13.148 [252/722] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.148 [253/722] Linking static target lib/librte_efd.a 00:02:13.148 [254/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:13.148 [255/722] Linking target lib/librte_ethdev.so.24.2 00:02:13.406 [256/722] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:02:13.406 [257/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:13.406 [258/722] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.406 [259/722] Linking target lib/librte_metrics.so.24.2 00:02:13.406 [260/722] Linking target lib/librte_efd.so.24.2 00:02:13.406 [261/722] Linking target lib/librte_bpf.so.24.2 00:02:13.664 [262/722] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:13.664 [263/722] Linking static target lib/librte_cryptodev.a 00:02:13.664 [264/722] Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols 00:02:13.664 [265/722] Linking target lib/librte_bitratestats.so.24.2 00:02:13.664 [266/722] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols 00:02:13.922 [267/722] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:13.922 [268/722] Linking static target lib/librte_dispatcher.a 00:02:14.180 [269/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:14.438 [270/722] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.695 [271/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:14.695 [272/722] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:14.695 [273/722] Linking static target lib/librte_gpudev.a 
00:02:14.953 [274/722] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:15.211 [275/722] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:15.211 [276/722] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:15.469 [277/722] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.469 [278/722] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:15.469 [279/722] Linking target lib/librte_cryptodev.so.24.2 00:02:15.726 [280/722] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:15.726 [281/722] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:02:15.726 [282/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:15.984 [283/722] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.255 [284/722] Linking target lib/librte_gpudev.so.24.2 00:02:16.255 [285/722] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:16.255 [286/722] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:16.255 [287/722] Linking static target lib/librte_gro.a 00:02:16.255 [288/722] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:16.514 [289/722] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.514 [290/722] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:16.514 [291/722] Linking target lib/librte_gro.so.24.2 00:02:16.514 [292/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:16.514 [293/722] Linking static target lib/librte_eventdev.a 00:02:16.514 [294/722] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:16.772 [295/722] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:16.772 [296/722] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:17.030 [297/722] Linking static target lib/librte_gso.a 00:02:17.287 [298/722] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.287 [299/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:17.287 [300/722] Linking target lib/librte_gso.so.24.2 00:02:17.287 [301/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:17.545 [302/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:17.545 [303/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:17.545 [304/722] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:17.804 [305/722] Linking static target lib/librte_jobstats.a 00:02:18.063 [306/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:18.063 [307/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:18.063 [308/722] Linking static target lib/librte_ip_frag.a 00:02:18.321 [309/722] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:18.321 [310/722] Linking static target lib/librte_latencystats.a 00:02:18.321 [311/722] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.321 [312/722] Linking target lib/librte_jobstats.so.24.2 00:02:18.321 [313/722] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:18.321 [314/722] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:18.578 [315/722] Generating 
lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.578 [316/722] Linking target lib/librte_ip_frag.so.24.2 00:02:18.578 [317/722] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.578 [318/722] Linking target lib/librte_latencystats.so.24.2 00:02:18.578 [319/722] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:18.578 [320/722] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:18.836 [321/722] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols 00:02:18.836 [322/722] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:18.836 [323/722] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:19.094 [324/722] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:19.661 [325/722] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:19.661 [326/722] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:19.661 [327/722] Linking static target lib/librte_lpm.a 00:02:19.661 [328/722] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.919 [329/722] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:19.919 [330/722] Linking static target lib/librte_pcapng.a 00:02:19.919 [331/722] Linking target lib/librte_eventdev.so.24.2 00:02:19.919 [332/722] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:20.178 [333/722] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:20.178 [334/722] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:20.178 [335/722] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.178 [336/722] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols 00:02:20.178 [337/722] Linking target lib/librte_lpm.so.24.2 00:02:20.178 [338/722] Linking target lib/librte_dispatcher.so.24.2 00:02:20.178 [339/722] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.178 [340/722] Linking target lib/librte_pcapng.so.24.2 00:02:20.436 [341/722] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:20.436 [342/722] Generating symbol file lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols 00:02:20.436 [343/722] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:20.436 [344/722] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols 00:02:21.039 [345/722] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:21.039 [346/722] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:21.297 [347/722] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:21.297 [348/722] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:21.555 [349/722] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:21.555 [350/722] Linking static target lib/librte_power.a 00:02:21.555 [351/722] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:21.813 [352/722] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:21.813 [353/722] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:21.813 [354/722] Linking static target lib/librte_rawdev.a 00:02:21.813 [355/722] Linking static target lib/librte_regexdev.a 00:02:21.813 [356/722] Compiling C object 
lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:22.378 [357/722] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:22.379 [358/722] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:22.379 [359/722] Linking static target lib/librte_member.a 00:02:22.379 [360/722] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.379 [361/722] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.379 [362/722] Linking target lib/librte_rawdev.so.24.2 00:02:22.379 [363/722] Linking target lib/librte_power.so.24.2 00:02:22.636 [364/722] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:22.636 [365/722] Linking static target lib/librte_mldev.a 00:02:22.636 [366/722] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:22.636 [367/722] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:22.636 [368/722] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.894 [369/722] Linking target lib/librte_member.so.24.2 00:02:22.894 [370/722] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:22.894 [371/722] Linking static target lib/librte_reorder.a 00:02:22.894 [372/722] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.894 [373/722] Linking target lib/librte_regexdev.so.24.2 00:02:23.151 [374/722] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:23.409 [375/722] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.409 [376/722] Linking target lib/librte_reorder.so.24.2 00:02:23.409 [377/722] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:23.409 [378/722] Linking static target lib/librte_rib.a 00:02:23.409 [379/722] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:23.666 [380/722] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols 00:02:23.666 [381/722] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:23.666 [382/722] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:23.666 [383/722] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:23.666 [384/722] Linking static target lib/librte_stack.a 00:02:24.233 [385/722] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.233 [386/722] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.233 [387/722] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:24.233 [388/722] Linking static target lib/librte_security.a 00:02:24.233 [389/722] Linking target lib/librte_rib.so.24.2 00:02:24.233 [390/722] Linking target lib/librte_stack.so.24.2 00:02:24.233 [391/722] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols 00:02:24.491 [392/722] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:24.491 [393/722] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.749 [394/722] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.749 [395/722] Linking target lib/librte_security.so.24.2 00:02:24.749 [396/722] Linking target lib/librte_mldev.so.24.2 00:02:24.749 [397/722] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:24.749 [398/722] Generating symbol file 
lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols 00:02:24.749 [399/722] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:24.749 [400/722] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:24.749 [401/722] Linking static target lib/librte_sched.a 00:02:25.315 [402/722] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:25.315 [403/722] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.572 [404/722] Linking target lib/librte_sched.so.24.2 00:02:25.572 [405/722] Generating symbol file lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols 00:02:25.830 [406/722] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:26.397 [407/722] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:26.397 [408/722] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:26.655 [409/722] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:26.655 [410/722] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:26.655 [411/722] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:27.587 [412/722] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:27.587 [413/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:27.845 [414/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:27.845 [415/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:27.845 [416/722] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:28.102 [417/722] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:28.102 [418/722] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:28.102 [419/722] Linking static target lib/librte_ipsec.a 00:02:28.359 [420/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:28.617 [421/722] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.617 [422/722] Linking target lib/librte_ipsec.so.24.2 00:02:28.875 [423/722] Generating symbol file lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols 00:02:28.875 [424/722] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:28.875 [425/722] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:28.875 [426/722] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:29.132 [427/722] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:29.132 [428/722] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:29.132 [429/722] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:02:29.132 [430/722] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:29.132 [431/722] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:30.068 [432/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:30.068 [433/722] Linking static target lib/librte_pdcp.a 00:02:30.327 [434/722] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.586 [435/722] Linking target lib/librte_pdcp.so.24.2 00:02:30.586 [436/722] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:30.586 [437/722] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:30.586 [438/722] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:30.586 [439/722] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:30.843 [440/722] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:30.843 [441/722] Compiling C 
object lib/librte_fib.a.p/fib_trie.c.o 00:02:30.843 [442/722] Linking static target lib/librte_fib.a 00:02:31.409 [443/722] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.409 [444/722] Linking target lib/librte_fib.so.24.2 00:02:31.667 [445/722] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:32.232 [446/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:32.232 [447/722] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:32.232 [448/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:32.232 [449/722] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:32.490 [450/722] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:32.490 [451/722] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:33.056 [452/722] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:33.056 [453/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:33.315 [454/722] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:33.573 [455/722] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:33.573 [456/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:33.573 [457/722] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:33.573 [458/722] Linking static target lib/librte_port.a 00:02:33.833 [459/722] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:34.091 [460/722] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:34.348 [461/722] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:34.348 [462/722] Linking static target lib/librte_pdump.a 00:02:34.348 [463/722] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.348 [464/722] Linking target lib/librte_port.so.24.2 00:02:34.605 [465/722] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:34.605 [466/722] Generating symbol file lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols 00:02:34.605 [467/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:34.605 [468/722] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.605 [469/722] Linking target lib/librte_pdump.so.24.2 00:02:35.537 [470/722] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:35.537 [471/722] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:02:35.537 [472/722] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:35.537 [473/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:35.537 [474/722] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:35.537 [475/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:36.102 [476/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:36.360 [477/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:36.360 [478/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:36.617 [479/722] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:36.617 [480/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:36.875 [481/722] Linking static target lib/librte_table.a 00:02:37.133 [482/722] 
Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:37.699 [483/722] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:37.957 [484/722] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.957 [485/722] Linking target lib/librte_table.so.24.2 00:02:37.957 [486/722] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:37.957 [487/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:38.215 [488/722] Generating symbol file lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols 00:02:38.215 [489/722] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:38.474 [490/722] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:39.409 [491/722] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:39.409 [492/722] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:39.409 [493/722] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:39.409 [494/722] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:39.667 [495/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:40.234 [496/722] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:40.234 [497/722] Linking static target lib/librte_graph.a 00:02:40.492 [498/722] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:40.492 [499/722] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:40.492 [500/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:40.750 [501/722] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:40.750 [502/722] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:41.008 [503/722] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.266 [504/722] Linking target lib/librte_graph.so.24.2 00:02:41.266 [505/722] Generating symbol file lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols 00:02:41.523 [506/722] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:41.782 [507/722] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:42.039 [508/722] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:42.606 [509/722] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:42.606 [510/722] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:42.606 [511/722] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:42.606 [512/722] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:42.865 [513/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:42.865 [514/722] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:43.123 [515/722] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:43.381 [516/722] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:43.639 [517/722] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:43.897 [518/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:43.897 [519/722] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:44.155 [520/722] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:44.155 [521/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:44.155 [522/722] Linking static target lib/librte_node.a 00:02:44.155 [523/722] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:44.155 [524/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:44.723 [525/722] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.723 [526/722] Linking target lib/librte_node.so.24.2 00:02:44.723 [527/722] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:44.723 [528/722] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:45.288 [529/722] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:45.288 [530/722] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:45.288 [531/722] Linking static target drivers/librte_bus_vdev.a 00:02:45.288 [532/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:45.288 [533/722] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:45.546 [534/722] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:45.546 [535/722] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:45.546 [536/722] Linking static target drivers/librte_bus_pci.a 00:02:45.546 [537/722] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.546 [538/722] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:45.804 [539/722] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:45.804 [540/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:45.804 [541/722] Linking target drivers/librte_bus_vdev.so.24.2 00:02:45.804 [542/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:45.804 [543/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:46.062 [544/722] Generating symbol file drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols 00:02:46.320 [545/722] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:46.320 [546/722] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:46.320 [547/722] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.320 [548/722] Linking target drivers/librte_bus_pci.so.24.2 00:02:46.578 [549/722] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:46.578 [550/722] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:46.578 [551/722] Linking static target drivers/librte_mempool_ring.a 00:02:46.578 [552/722] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:46.578 [553/722] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols 00:02:46.578 [554/722] Linking target drivers/librte_mempool_ring.so.24.2 00:02:46.836 [555/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:47.401 [556/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:47.966 [557/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:48.224 [558/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:48.224 [559/722] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:48.789 [560/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:49.725 
[561/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:49.725 [562/722] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:49.725 [563/722] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:49.982 [564/722] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:50.240 [565/722] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:50.240 [566/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:50.811 [567/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:50.811 [568/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:50.811 [569/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:51.075 [570/722] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:02:51.394 [571/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:51.677 [572/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:52.242 [573/722] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:52.242 [574/722] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:52.500 [575/722] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:52.500 [576/722] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:53.064 [577/722] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:53.629 [578/722] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:53.629 [579/722] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:53.629 [580/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:53.629 [581/722] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:53.887 [582/722] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:02:53.887 [583/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:54.144 [584/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:54.402 [585/722] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:54.660 [586/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:54.660 [587/722] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:54.660 [588/722] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:54.917 [589/722] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:54.917 [590/722] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:54.917 [591/722] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:55.175 [592/722] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:55.175 [593/722] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:55.432 [594/722] Linking static target drivers/librte_net_i40e.a 00:02:55.432 [595/722] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:55.432 [596/722] Compiling C object drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:55.432 [597/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:55.690 [598/722] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:55.690 [599/722] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:55.947 [600/722] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:56.523 [601/722] 
Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.523 [602/722] Linking target drivers/librte_net_i40e.so.24.2 00:02:56.523 [603/722] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:57.088 [604/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:57.088 [605/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:57.346 [606/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:57.346 [607/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:57.911 [608/722] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:57.911 [609/722] Linking static target lib/librte_vhost.a 00:02:58.168 [610/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:58.168 [611/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:58.168 [612/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:58.426 [613/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:58.426 [614/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:58.426 [615/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:58.991 [616/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:59.557 [617/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:59.557 [618/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:59.557 [619/722] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.815 [620/722] Linking target lib/librte_vhost.so.24.2 00:03:00.072 [621/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:00.072 [622/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:00.072 [623/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:00.072 [624/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:00.072 [625/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:00.330 [626/722] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:00.330 [627/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:00.587 [628/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:01.163 [629/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:01.163 [630/722] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:01.463 [631/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:01.463 [632/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:01.463 [633/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:01.721 [634/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:03.619 [635/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:03.619 [636/722] Compiling C 
object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:03.619 [637/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:03.619 [638/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:03.619 [639/722] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:03.619 [640/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:03.877 [641/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:03.877 [642/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:04.133 [643/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:04.391 [644/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:04.649 [645/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:04.649 [646/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:04.649 [647/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:04.649 [648/722] Linking static target lib/librte_pipeline.a 00:03:04.649 [649/722] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:04.907 [650/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:05.164 [651/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:05.422 [652/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:05.422 [653/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:05.680 [654/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:05.680 [655/722] Linking target app/dpdk-dumpcap 00:03:05.680 [656/722] Linking target app/dpdk-graph 00:03:05.680 [657/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:05.938 [658/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:06.196 [659/722] Linking target app/dpdk-pdump 00:03:06.196 [660/722] Linking target app/dpdk-test-cmdline 00:03:06.496 [661/722] Linking target app/dpdk-test-acl 00:03:06.496 [662/722] Linking target app/dpdk-test-compress-perf 00:03:06.496 [663/722] Linking target app/dpdk-proc-info 00:03:06.496 [664/722] Linking target app/dpdk-test-bbdev 00:03:06.496 [665/722] Linking target app/dpdk-test-crypto-perf 00:03:06.754 [666/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:06.754 [667/722] Linking target app/dpdk-test-dma-perf 00:03:07.011 [668/722] Linking target app/dpdk-test-gpudev 00:03:07.011 [669/722] Linking target app/dpdk-test-fib 00:03:07.011 [670/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:07.011 [671/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:07.269 [672/722] Linking target app/dpdk-test-flow-perf 00:03:07.527 [673/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:07.784 [674/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:07.784 [675/722] Linking target app/dpdk-test-eventdev 00:03:07.784 [676/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:08.040 [677/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:08.040 [678/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:08.040 [679/722] Compiling C 
object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:08.298 [680/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:08.555 [681/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:08.555 [682/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:08.555 [683/722] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:09.120 [684/722] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.120 [685/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:09.120 [686/722] Linking target lib/librte_pipeline.so.24.2 00:03:09.378 [687/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:09.944 [688/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:09.944 [689/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:10.202 [690/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:10.202 [691/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:10.460 [692/722] Linking target app/dpdk-test-pipeline 00:03:10.460 [693/722] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:11.029 [694/722] Linking target app/dpdk-test-mldev 00:03:11.286 [695/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:11.544 [696/722] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:11.544 [697/722] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:11.544 [698/722] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:11.544 [699/722] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:11.801 [700/722] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:12.059 [701/722] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:12.316 [702/722] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:12.317 [703/722] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:12.574 [704/722] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:12.574 [705/722] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:12.574 [706/722] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:13.139 [707/722] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:13.139 [708/722] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:13.396 [709/722] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:13.963 [710/722] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:13.963 [711/722] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:14.221 [712/722] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:03:14.221 [713/722] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:14.479 [714/722] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:14.479 [715/722] Linking target app/dpdk-test-regex 00:03:14.737 [716/722] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:14.995 [717/722] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:14.995 [718/722] Linking target app/dpdk-test-sad 00:03:15.253 [719/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:15.253 [720/722] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 
00:03:15.511 [721/722] Linking target app/dpdk-test-security-perf 00:03:16.077 [722/722] Linking target app/dpdk-testpmd 00:03:16.077 14:21:28 build_native_dpdk -- common/autobuild_common.sh@188 -- $ uname -s 00:03:16.077 14:21:28 build_native_dpdk -- common/autobuild_common.sh@188 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:16.077 14:21:28 build_native_dpdk -- common/autobuild_common.sh@201 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:16.077 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:16.077 [0/1] Installing files. 00:03:16.339 Installing subdir /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/counters.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/cpu.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/memory.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:16.339 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:16.339 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.340 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.340 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.340 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.341 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 
00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 
00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:16.341 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.342 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 
00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:16.342 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:16.343 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:16.632 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:16.632 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:16.632 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:16.632 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:16.632 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:16.632 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:16.632 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_log.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_kvargs.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 
Installing lib/librte_argparse.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_telemetry.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_eal.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_rcu.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_mempool.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_mbuf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_net.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_meter.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_ethdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_cmdline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_metrics.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_hash.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_timer.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_acl.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_bbdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_bitratestats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_bpf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_cfgfile.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing 
lib/librte_compressdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_cryptodev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_distributor.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_dmadev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_efd.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_eventdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_dispatcher.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_gpudev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_gro.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_gso.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_ip_frag.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_jobstats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_latencystats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_lpm.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_member.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_pcapng.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_power.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_rawdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_regexdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_mldev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_rib.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_rib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_reorder.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_sched.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_security.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_stack.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.633 Installing lib/librte_vhost.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.893 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.893 Installing lib/librte_ipsec.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.893 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.893 Installing lib/librte_pdcp.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.893 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.893 Installing lib/librte_fib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.893 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.893 Installing lib/librte_port.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.893 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.893 Installing lib/librte_pdump.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.893 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.893 Installing lib/librte_table.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.893 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.893 Installing lib/librte_pipeline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.893 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.893 Installing lib/librte_graph.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.893 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.893 Installing lib/librte_node.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.893 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.893 Installing drivers/librte_bus_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:03:16.893 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.893 Installing drivers/librte_bus_vdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:03:16.893 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.893 Installing drivers/librte_mempool_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:03:16.893 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.893 Installing drivers/librte_net_i40e.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:03:16.893 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.893 Installing app/dpdk-graph to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.893 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.893 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.893 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.893 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.893 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.893 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.893 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.893 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.893 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.893 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.893 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.893 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.893 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.893 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.893 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.893 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.893 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.894 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.894 
Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/ptr_compress/rte_ptr_compress.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.894 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing 
/home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing 
/home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.895 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.896 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.155 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.156 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.156 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:17.156 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.156 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.156 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.156 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.156 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.156 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.156 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.156 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.156 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.156 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.156 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.156 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.156 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.156 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.156 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.156 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.156 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.156 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.156 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.156 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.156 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.156 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.156 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.156 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.156 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry-exporter.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.156 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.156 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:17.156 Installing 
/home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:17.156 Installing symlink pointing to librte_log.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:17.156 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:17.156 Installing symlink pointing to librte_kvargs.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:17.156 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:17.156 Installing symlink pointing to librte_argparse.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.24 00:03:17.156 Installing symlink pointing to librte_argparse.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so 00:03:17.156 Installing symlink pointing to librte_telemetry.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:17.156 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:17.156 Installing symlink pointing to librte_eal.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:17.156 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:17.156 Installing symlink pointing to librte_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:17.156 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:17.156 Installing symlink pointing to librte_rcu.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:17.156 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:17.156 Installing symlink pointing to librte_mempool.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:17.156 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:17.156 Installing symlink pointing to librte_mbuf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:17.156 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:17.156 Installing symlink pointing to librte_net.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:17.156 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:17.156 Installing symlink pointing to librte_meter.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:17.156 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:17.156 Installing symlink pointing to librte_ethdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:17.156 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:17.156 Installing symlink pointing to librte_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:17.156 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:17.156 Installing symlink pointing to librte_cmdline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:17.156 Installing symlink pointing to librte_cmdline.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:17.156 Installing symlink pointing to librte_metrics.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:17.156 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:17.156 Installing symlink pointing to librte_hash.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:17.156 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:17.156 Installing symlink pointing to librte_timer.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:17.156 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:17.156 Installing symlink pointing to librte_acl.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:17.156 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:17.156 Installing symlink pointing to librte_bbdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:17.156 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:17.156 Installing symlink pointing to librte_bitratestats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:17.156 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:17.156 Installing symlink pointing to librte_bpf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:17.156 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:17.156 Installing symlink pointing to librte_cfgfile.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:17.156 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:17.156 Installing symlink pointing to librte_compressdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:17.156 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:17.156 Installing symlink pointing to librte_cryptodev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:17.156 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:17.156 Installing symlink pointing to librte_distributor.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:17.156 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:17.156 Installing symlink pointing to librte_dmadev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:17.156 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:17.156 Installing symlink pointing to librte_efd.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:17.156 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:17.156 Installing symlink pointing to librte_eventdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:17.156 Installing symlink pointing to librte_eventdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:17.156 Installing symlink pointing to librte_dispatcher.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:17.156 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:17.156 Installing symlink pointing to librte_gpudev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:17.156 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:17.156 Installing symlink pointing to librte_gro.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:17.156 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:17.156 Installing symlink pointing to librte_gso.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:17.156 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:17.156 Installing symlink pointing to librte_ip_frag.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:17.156 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:17.156 Installing symlink pointing to librte_jobstats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:17.156 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:17.156 Installing symlink pointing to librte_latencystats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:17.156 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:17.156 Installing symlink pointing to librte_lpm.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:17.156 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:17.156 Installing symlink pointing to librte_member.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:17.156 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:17.156 Installing symlink pointing to librte_pcapng.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:17.156 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:03:17.156 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:03:17.156 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:03:17.156 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:03:17.156 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:03:17.156 './librte_bus_vdev.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:03:17.156 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 00:03:17.156 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24' 00:03:17.156 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2' 00:03:17.156 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so' 00:03:17.156 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24' 00:03:17.157 './librte_net_i40e.so.24.2' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2' 00:03:17.157 Installing symlink pointing to librte_pcapng.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:17.157 Installing symlink pointing to librte_power.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:17.157 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:17.157 Installing symlink pointing to librte_rawdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:17.157 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:17.157 Installing symlink pointing to librte_regexdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:17.157 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:17.157 Installing symlink pointing to librte_mldev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:17.157 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:17.157 Installing symlink pointing to librte_rib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:17.157 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:17.157 Installing symlink pointing to librte_reorder.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:17.157 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:17.157 Installing symlink pointing to librte_sched.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:17.157 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:17.157 Installing symlink pointing to librte_security.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:17.157 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:17.157 Installing symlink pointing to librte_stack.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:17.157 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:17.157 Installing symlink pointing to librte_vhost.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:17.157 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:17.157 Installing symlink pointing to librte_ipsec.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:17.157 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:17.157 Installing symlink pointing to librte_pdcp.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:17.157 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:17.157 Installing symlink pointing to librte_fib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:17.157 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:17.157 Installing symlink pointing to librte_port.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:17.157 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:17.157 Installing 
symlink pointing to librte_pdump.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:17.157 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:17.157 Installing symlink pointing to librte_table.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:17.157 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:17.157 Installing symlink pointing to librte_pipeline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:17.157 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:17.157 Installing symlink pointing to librte_graph.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:17.157 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:17.157 Installing symlink pointing to librte_node.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:17.157 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:17.157 Installing symlink pointing to librte_bus_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:03:17.157 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:03:17.157 Installing symlink pointing to librte_bus_vdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:03:17.157 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:03:17.157 Installing symlink pointing to librte_mempool_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 00:03:17.157 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:03:17.157 Installing symlink pointing to librte_net_i40e.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:03:17.157 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:03:17.157 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2' 00:03:17.157 14:21:29 build_native_dpdk -- common/autobuild_common.sh@207 -- $ cat 00:03:17.157 ************************************ 00:03:17.157 END TEST build_native_dpdk 00:03:17.157 ************************************ 00:03:17.157 14:21:29 build_native_dpdk -- common/autobuild_common.sh@212 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:17.157 00:03:17.157 real 1m31.911s 00:03:17.157 user 11m57.326s 00:03:17.157 sys 1m31.646s 00:03:17.157 14:21:29 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:17.157 14:21:29 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:17.157 14:21:29 -- common/autotest_common.sh@1142 -- $ return 0 00:03:17.157 14:21:29 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:17.157 14:21:29 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:17.157 14:21:29 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:17.157 14:21:29 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:17.157 14:21:29 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 
00:03:17.157 14:21:29 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:17.157 14:21:29 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:17.157 14:21:29 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:03:19.057 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:19.057 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.057 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:19.057 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:19.622 Using 'verbs' RDMA provider 00:03:32.825 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:45.040 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:45.040 go version go1.21.1 linux/amd64 00:03:45.040 Creating mk/config.mk...done. 00:03:45.040 Creating mk/cc.flags.mk...done. 00:03:45.040 Type 'make' to build. 00:03:45.040 14:21:56 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:45.040 14:21:56 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:03:45.040 14:21:56 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:45.040 14:21:56 -- common/autotest_common.sh@10 -- $ set +x 00:03:45.040 ************************************ 00:03:45.040 START TEST make 00:03:45.040 ************************************ 00:03:45.040 14:21:56 make -- common/autotest_common.sh@1123 -- $ make -j10 00:03:45.040 make[1]: Nothing to be done for 'all'. 00:04:23.766 CC lib/log/log.o 00:04:23.766 CC lib/log/log_flags.o 00:04:23.766 CC lib/ut_mock/mock.o 00:04:23.766 CC lib/log/log_deprecated.o 00:04:23.766 CC lib/ut/ut.o 00:04:23.766 LIB libspdk_ut_mock.a 00:04:23.766 LIB libspdk_log.a 00:04:23.766 LIB libspdk_ut.a 00:04:23.766 SO libspdk_ut_mock.so.6.0 00:04:23.766 SO libspdk_ut.so.2.0 00:04:23.766 SO libspdk_log.so.7.0 00:04:23.766 SYMLINK libspdk_ut_mock.so 00:04:23.766 SYMLINK libspdk_ut.so 00:04:23.766 SYMLINK libspdk_log.so 00:04:23.766 CC lib/dma/dma.o 00:04:23.766 CC lib/util/base64.o 00:04:23.766 CC lib/util/bit_array.o 00:04:23.766 CC lib/util/crc16.o 00:04:23.766 CC lib/util/cpuset.o 00:04:23.766 CXX lib/trace_parser/trace.o 00:04:23.766 CC lib/util/crc32.o 00:04:23.766 CC lib/util/crc32c.o 00:04:23.766 CC lib/ioat/ioat.o 00:04:23.766 CC lib/vfio_user/host/vfio_user_pci.o 00:04:23.766 CC lib/util/crc32_ieee.o 00:04:23.766 CC lib/util/crc64.o 00:04:23.766 CC lib/util/dif.o 00:04:23.766 LIB libspdk_dma.a 00:04:23.766 CC lib/util/fd.o 00:04:23.766 SO libspdk_dma.so.4.0 00:04:23.766 CC lib/util/file.o 00:04:23.766 CC lib/util/hexlify.o 00:04:23.766 SYMLINK libspdk_dma.so 00:04:23.766 CC lib/util/iov.o 00:04:23.766 CC lib/vfio_user/host/vfio_user.o 00:04:23.766 CC lib/util/math.o 00:04:23.766 CC lib/util/pipe.o 00:04:23.766 CC lib/util/strerror_tls.o 00:04:23.766 CC lib/util/string.o 00:04:23.766 CC lib/util/uuid.o 00:04:23.766 LIB libspdk_ioat.a 00:04:23.766 SO libspdk_ioat.so.7.0 00:04:23.766 CC lib/util/fd_group.o 00:04:23.766 CC lib/util/xor.o 00:04:23.766 SYMLINK libspdk_ioat.so 00:04:23.766 CC lib/util/zipf.o 00:04:23.766 LIB libspdk_vfio_user.a 00:04:23.766 SO libspdk_vfio_user.so.5.0 00:04:23.766 SYMLINK libspdk_vfio_user.so 00:04:23.766 LIB libspdk_util.a 00:04:23.766 SO libspdk_util.so.9.1 
00:04:23.766 SYMLINK libspdk_util.so 00:04:23.766 LIB libspdk_trace_parser.a 00:04:23.766 SO libspdk_trace_parser.so.5.0 00:04:23.766 CC lib/vmd/vmd.o 00:04:23.766 CC lib/vmd/led.o 00:04:23.766 SYMLINK libspdk_trace_parser.so 00:04:23.766 CC lib/conf/conf.o 00:04:23.766 CC lib/json/json_parse.o 00:04:23.766 CC lib/json/json_util.o 00:04:23.766 CC lib/rdma_provider/common.o 00:04:23.766 CC lib/env_dpdk/env.o 00:04:23.766 CC lib/json/json_write.o 00:04:23.766 CC lib/idxd/idxd.o 00:04:23.766 CC lib/rdma_utils/rdma_utils.o 00:04:23.766 CC lib/env_dpdk/memory.o 00:04:23.766 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:23.766 LIB libspdk_conf.a 00:04:23.766 CC lib/idxd/idxd_user.o 00:04:23.766 SO libspdk_conf.so.6.0 00:04:23.766 CC lib/env_dpdk/pci.o 00:04:23.766 SYMLINK libspdk_conf.so 00:04:23.766 CC lib/env_dpdk/init.o 00:04:23.766 LIB libspdk_rdma_provider.a 00:04:23.766 SO libspdk_rdma_provider.so.6.0 00:04:23.766 LIB libspdk_json.a 00:04:23.766 LIB libspdk_rdma_utils.a 00:04:23.766 SYMLINK libspdk_rdma_provider.so 00:04:23.766 SO libspdk_json.so.6.0 00:04:23.766 CC lib/env_dpdk/threads.o 00:04:23.766 SO libspdk_rdma_utils.so.1.0 00:04:23.766 CC lib/idxd/idxd_kernel.o 00:04:23.766 SYMLINK libspdk_json.so 00:04:23.766 CC lib/env_dpdk/pci_ioat.o 00:04:23.766 SYMLINK libspdk_rdma_utils.so 00:04:23.766 CC lib/env_dpdk/pci_virtio.o 00:04:23.766 CC lib/env_dpdk/pci_vmd.o 00:04:23.766 CC lib/env_dpdk/pci_idxd.o 00:04:23.766 CC lib/env_dpdk/pci_event.o 00:04:23.766 CC lib/env_dpdk/sigbus_handler.o 00:04:23.766 CC lib/env_dpdk/pci_dpdk.o 00:04:23.766 LIB libspdk_idxd.a 00:04:23.766 CC lib/jsonrpc/jsonrpc_server.o 00:04:23.766 SO libspdk_idxd.so.12.0 00:04:23.766 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:23.766 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:23.766 LIB libspdk_vmd.a 00:04:23.766 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:23.766 SYMLINK libspdk_idxd.so 00:04:23.766 CC lib/jsonrpc/jsonrpc_client.o 00:04:23.766 SO libspdk_vmd.so.6.0 00:04:23.766 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:23.766 SYMLINK libspdk_vmd.so 00:04:23.766 LIB libspdk_jsonrpc.a 00:04:23.766 SO libspdk_jsonrpc.so.6.0 00:04:23.766 SYMLINK libspdk_jsonrpc.so 00:04:23.766 CC lib/rpc/rpc.o 00:04:23.766 LIB libspdk_env_dpdk.a 00:04:23.766 SO libspdk_env_dpdk.so.14.1 00:04:23.766 LIB libspdk_rpc.a 00:04:23.766 SO libspdk_rpc.so.6.0 00:04:23.766 SYMLINK libspdk_env_dpdk.so 00:04:23.766 SYMLINK libspdk_rpc.so 00:04:23.766 CC lib/notify/notify.o 00:04:23.766 CC lib/notify/notify_rpc.o 00:04:23.766 CC lib/trace/trace.o 00:04:23.766 CC lib/trace/trace_rpc.o 00:04:23.766 CC lib/trace/trace_flags.o 00:04:23.766 CC lib/keyring/keyring_rpc.o 00:04:23.766 CC lib/keyring/keyring.o 00:04:23.766 LIB libspdk_notify.a 00:04:23.766 SO libspdk_notify.so.6.0 00:04:23.766 LIB libspdk_trace.a 00:04:23.766 LIB libspdk_keyring.a 00:04:23.766 SYMLINK libspdk_notify.so 00:04:23.766 SO libspdk_keyring.so.1.0 00:04:23.766 SO libspdk_trace.so.10.0 00:04:23.766 SYMLINK libspdk_keyring.so 00:04:23.766 SYMLINK libspdk_trace.so 00:04:23.766 CC lib/sock/sock.o 00:04:23.766 CC lib/sock/sock_rpc.o 00:04:23.766 CC lib/thread/thread.o 00:04:23.766 CC lib/thread/iobuf.o 00:04:23.766 LIB libspdk_sock.a 00:04:23.766 SO libspdk_sock.so.10.0 00:04:23.766 SYMLINK libspdk_sock.so 00:04:23.766 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:23.766 CC lib/nvme/nvme_ctrlr.o 00:04:23.766 CC lib/nvme/nvme_fabric.o 00:04:23.766 CC lib/nvme/nvme_ns_cmd.o 00:04:23.766 CC lib/nvme/nvme_pcie_common.o 00:04:23.766 CC lib/nvme/nvme_ns.o 00:04:23.766 CC lib/nvme/nvme_pcie.o 00:04:23.766 CC 
lib/nvme/nvme_qpair.o 00:04:23.766 CC lib/nvme/nvme.o 00:04:24.024 CC lib/nvme/nvme_quirks.o 00:04:24.024 LIB libspdk_thread.a 00:04:24.024 SO libspdk_thread.so.10.1 00:04:24.291 CC lib/nvme/nvme_transport.o 00:04:24.291 CC lib/nvme/nvme_discovery.o 00:04:24.291 SYMLINK libspdk_thread.so 00:04:24.291 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:24.291 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:24.291 CC lib/nvme/nvme_tcp.o 00:04:24.549 CC lib/nvme/nvme_opal.o 00:04:24.549 CC lib/nvme/nvme_io_msg.o 00:04:24.549 CC lib/accel/accel.o 00:04:24.549 CC lib/nvme/nvme_poll_group.o 00:04:24.808 CC lib/nvme/nvme_zns.o 00:04:25.066 CC lib/blob/blobstore.o 00:04:25.066 CC lib/nvme/nvme_stubs.o 00:04:25.066 CC lib/init/json_config.o 00:04:25.066 CC lib/nvme/nvme_auth.o 00:04:25.324 CC lib/blob/request.o 00:04:25.324 CC lib/nvme/nvme_cuse.o 00:04:25.582 CC lib/init/subsystem.o 00:04:25.582 CC lib/nvme/nvme_rdma.o 00:04:25.582 CC lib/init/subsystem_rpc.o 00:04:25.841 CC lib/accel/accel_rpc.o 00:04:25.841 CC lib/init/rpc.o 00:04:25.841 CC lib/blob/zeroes.o 00:04:25.841 CC lib/virtio/virtio.o 00:04:25.841 CC lib/accel/accel_sw.o 00:04:25.841 CC lib/blob/blob_bs_dev.o 00:04:25.841 LIB libspdk_init.a 00:04:25.841 CC lib/virtio/virtio_vhost_user.o 00:04:25.841 CC lib/virtio/virtio_vfio_user.o 00:04:26.099 SO libspdk_init.so.5.0 00:04:26.099 SYMLINK libspdk_init.so 00:04:26.099 CC lib/virtio/virtio_pci.o 00:04:26.099 LIB libspdk_accel.a 00:04:26.099 SO libspdk_accel.so.15.1 00:04:26.357 SYMLINK libspdk_accel.so 00:04:26.357 CC lib/event/app.o 00:04:26.357 CC lib/event/reactor.o 00:04:26.357 CC lib/event/log_rpc.o 00:04:26.357 CC lib/event/scheduler_static.o 00:04:26.357 CC lib/event/app_rpc.o 00:04:26.357 LIB libspdk_virtio.a 00:04:26.357 SO libspdk_virtio.so.7.0 00:04:26.615 CC lib/bdev/bdev_rpc.o 00:04:26.615 CC lib/bdev/bdev.o 00:04:26.615 SYMLINK libspdk_virtio.so 00:04:26.615 CC lib/bdev/bdev_zone.o 00:04:26.615 CC lib/bdev/part.o 00:04:26.615 CC lib/bdev/scsi_nvme.o 00:04:26.873 LIB libspdk_event.a 00:04:26.873 SO libspdk_event.so.14.0 00:04:26.873 SYMLINK libspdk_event.so 00:04:27.131 LIB libspdk_nvme.a 00:04:27.390 SO libspdk_nvme.so.13.1 00:04:27.686 SYMLINK libspdk_nvme.so 00:04:27.978 LIB libspdk_blob.a 00:04:28.237 SO libspdk_blob.so.11.0 00:04:28.237 SYMLINK libspdk_blob.so 00:04:28.495 CC lib/lvol/lvol.o 00:04:28.495 CC lib/blobfs/blobfs.o 00:04:28.495 CC lib/blobfs/tree.o 00:04:29.430 LIB libspdk_bdev.a 00:04:29.430 LIB libspdk_blobfs.a 00:04:29.430 SO libspdk_bdev.so.15.1 00:04:29.430 SO libspdk_blobfs.so.10.0 00:04:29.430 SYMLINK libspdk_blobfs.so 00:04:29.430 SYMLINK libspdk_bdev.so 00:04:29.430 LIB libspdk_lvol.a 00:04:29.689 SO libspdk_lvol.so.10.0 00:04:29.689 SYMLINK libspdk_lvol.so 00:04:29.689 CC lib/nbd/nbd.o 00:04:29.689 CC lib/nbd/nbd_rpc.o 00:04:29.689 CC lib/ublk/ublk.o 00:04:29.689 CC lib/ublk/ublk_rpc.o 00:04:29.689 CC lib/nvmf/ctrlr.o 00:04:29.689 CC lib/nvmf/ctrlr_bdev.o 00:04:29.689 CC lib/nvmf/ctrlr_discovery.o 00:04:29.689 CC lib/scsi/dev.o 00:04:29.689 CC lib/nvmf/subsystem.o 00:04:29.689 CC lib/ftl/ftl_core.o 00:04:29.948 CC lib/ftl/ftl_init.o 00:04:29.948 CC lib/ftl/ftl_layout.o 00:04:30.206 CC lib/scsi/lun.o 00:04:30.206 LIB libspdk_nbd.a 00:04:30.206 CC lib/nvmf/nvmf.o 00:04:30.206 SO libspdk_nbd.so.7.0 00:04:30.464 SYMLINK libspdk_nbd.so 00:04:30.464 CC lib/ftl/ftl_debug.o 00:04:30.464 CC lib/ftl/ftl_io.o 00:04:30.464 CC lib/ftl/ftl_sb.o 00:04:30.464 CC lib/ftl/ftl_l2p.o 00:04:30.723 CC lib/scsi/port.o 00:04:30.723 LIB libspdk_ublk.a 00:04:30.723 CC lib/ftl/ftl_l2p_flat.o 
00:04:30.723 CC lib/ftl/ftl_nv_cache.o 00:04:30.723 SO libspdk_ublk.so.3.0 00:04:30.980 CC lib/ftl/ftl_band.o 00:04:30.980 SYMLINK libspdk_ublk.so 00:04:30.980 CC lib/nvmf/nvmf_rpc.o 00:04:30.980 CC lib/nvmf/transport.o 00:04:30.980 CC lib/nvmf/tcp.o 00:04:30.980 CC lib/scsi/scsi.o 00:04:31.238 CC lib/nvmf/stubs.o 00:04:31.238 CC lib/nvmf/mdns_server.o 00:04:31.238 CC lib/scsi/scsi_bdev.o 00:04:31.497 CC lib/scsi/scsi_pr.o 00:04:31.756 CC lib/scsi/scsi_rpc.o 00:04:31.756 CC lib/ftl/ftl_band_ops.o 00:04:31.756 CC lib/scsi/task.o 00:04:31.756 CC lib/ftl/ftl_writer.o 00:04:31.756 CC lib/nvmf/rdma.o 00:04:31.756 CC lib/nvmf/auth.o 00:04:32.014 CC lib/ftl/ftl_rq.o 00:04:32.014 CC lib/ftl/ftl_reloc.o 00:04:32.014 CC lib/ftl/ftl_p2l.o 00:04:32.014 CC lib/ftl/ftl_l2p_cache.o 00:04:32.273 CC lib/ftl/mngt/ftl_mngt.o 00:04:32.273 LIB libspdk_scsi.a 00:04:32.273 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:32.273 SO libspdk_scsi.so.9.0 00:04:32.273 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:32.273 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:32.531 SYMLINK libspdk_scsi.so 00:04:32.531 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:32.531 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:32.531 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:32.531 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:32.531 CC lib/iscsi/conn.o 00:04:32.807 CC lib/vhost/vhost.o 00:04:32.807 CC lib/vhost/vhost_rpc.o 00:04:32.807 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:32.807 CC lib/vhost/vhost_scsi.o 00:04:32.807 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:33.074 CC lib/iscsi/init_grp.o 00:04:33.074 CC lib/iscsi/iscsi.o 00:04:33.074 CC lib/iscsi/md5.o 00:04:33.074 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:33.332 CC lib/iscsi/param.o 00:04:33.332 CC lib/iscsi/portal_grp.o 00:04:33.332 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:33.332 CC lib/iscsi/tgt_node.o 00:04:33.332 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:33.332 CC lib/iscsi/iscsi_subsystem.o 00:04:33.590 CC lib/ftl/utils/ftl_conf.o 00:04:33.590 CC lib/ftl/utils/ftl_md.o 00:04:33.590 CC lib/ftl/utils/ftl_mempool.o 00:04:33.590 CC lib/ftl/utils/ftl_bitmap.o 00:04:33.590 CC lib/ftl/utils/ftl_property.o 00:04:33.849 CC lib/vhost/vhost_blk.o 00:04:33.849 CC lib/iscsi/iscsi_rpc.o 00:04:33.849 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:33.849 CC lib/iscsi/task.o 00:04:33.849 CC lib/vhost/rte_vhost_user.o 00:04:33.849 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:34.108 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:34.108 LIB libspdk_nvmf.a 00:04:34.108 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:34.108 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:34.108 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:34.108 SO libspdk_nvmf.so.18.1 00:04:34.108 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:34.108 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:34.366 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:34.366 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:34.366 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:34.366 CC lib/ftl/base/ftl_base_dev.o 00:04:34.366 SYMLINK libspdk_nvmf.so 00:04:34.366 CC lib/ftl/base/ftl_base_bdev.o 00:04:34.366 CC lib/ftl/ftl_trace.o 00:04:34.624 LIB libspdk_iscsi.a 00:04:34.624 SO libspdk_iscsi.so.8.0 00:04:34.624 LIB libspdk_ftl.a 00:04:34.881 SYMLINK libspdk_iscsi.so 00:04:34.881 SO libspdk_ftl.so.9.0 00:04:35.139 LIB libspdk_vhost.a 00:04:35.139 SO libspdk_vhost.so.8.0 00:04:35.397 SYMLINK libspdk_vhost.so 00:04:35.397 SYMLINK libspdk_ftl.so 00:04:35.654 CC module/env_dpdk/env_dpdk_rpc.o 00:04:35.654 CC module/accel/error/accel_error.o 00:04:35.654 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:35.654 CC module/accel/ioat/accel_ioat.o 00:04:35.654 CC 
module/scheduler/gscheduler/gscheduler.o 00:04:35.654 CC module/sock/posix/posix.o 00:04:35.654 CC module/blob/bdev/blob_bdev.o 00:04:35.654 CC module/accel/dsa/accel_dsa.o 00:04:35.654 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:35.912 CC module/keyring/file/keyring.o 00:04:35.912 LIB libspdk_env_dpdk_rpc.a 00:04:35.912 SO libspdk_env_dpdk_rpc.so.6.0 00:04:35.912 SYMLINK libspdk_env_dpdk_rpc.so 00:04:35.912 CC module/keyring/file/keyring_rpc.o 00:04:35.912 CC module/accel/error/accel_error_rpc.o 00:04:35.912 CC module/accel/ioat/accel_ioat_rpc.o 00:04:35.912 LIB libspdk_scheduler_dynamic.a 00:04:35.912 LIB libspdk_scheduler_dpdk_governor.a 00:04:35.912 SO libspdk_scheduler_dynamic.so.4.0 00:04:35.912 LIB libspdk_scheduler_gscheduler.a 00:04:35.912 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:36.169 CC module/accel/dsa/accel_dsa_rpc.o 00:04:36.169 LIB libspdk_blob_bdev.a 00:04:36.169 LIB libspdk_keyring_file.a 00:04:36.169 SO libspdk_scheduler_gscheduler.so.4.0 00:04:36.169 SO libspdk_blob_bdev.so.11.0 00:04:36.169 SO libspdk_keyring_file.so.1.0 00:04:36.169 SYMLINK libspdk_scheduler_dynamic.so 00:04:36.169 LIB libspdk_accel_error.a 00:04:36.169 SYMLINK libspdk_scheduler_gscheduler.so 00:04:36.169 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:36.169 LIB libspdk_accel_ioat.a 00:04:36.169 CC module/keyring/linux/keyring_rpc.o 00:04:36.169 CC module/keyring/linux/keyring.o 00:04:36.169 SYMLINK libspdk_blob_bdev.so 00:04:36.169 SYMLINK libspdk_keyring_file.so 00:04:36.169 SO libspdk_accel_error.so.2.0 00:04:36.169 SO libspdk_accel_ioat.so.6.0 00:04:36.169 LIB libspdk_accel_dsa.a 00:04:36.169 SO libspdk_accel_dsa.so.5.0 00:04:36.169 SYMLINK libspdk_accel_ioat.so 00:04:36.169 SYMLINK libspdk_accel_error.so 00:04:36.427 SYMLINK libspdk_accel_dsa.so 00:04:36.428 LIB libspdk_keyring_linux.a 00:04:36.428 CC module/accel/iaa/accel_iaa.o 00:04:36.428 SO libspdk_keyring_linux.so.1.0 00:04:36.428 SYMLINK libspdk_keyring_linux.so 00:04:36.428 CC module/bdev/error/vbdev_error.o 00:04:36.428 CC module/bdev/error/vbdev_error_rpc.o 00:04:36.428 CC module/bdev/delay/vbdev_delay.o 00:04:36.428 CC module/bdev/lvol/vbdev_lvol.o 00:04:36.428 CC module/bdev/gpt/gpt.o 00:04:36.428 CC module/blobfs/bdev/blobfs_bdev.o 00:04:36.428 CC module/bdev/malloc/bdev_malloc.o 00:04:36.428 CC module/bdev/null/bdev_null.o 00:04:36.685 CC module/accel/iaa/accel_iaa_rpc.o 00:04:36.685 CC module/bdev/null/bdev_null_rpc.o 00:04:36.685 CC module/bdev/gpt/vbdev_gpt.o 00:04:36.685 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:36.685 LIB libspdk_bdev_error.a 00:04:36.685 LIB libspdk_sock_posix.a 00:04:36.685 SO libspdk_bdev_error.so.6.0 00:04:36.685 LIB libspdk_accel_iaa.a 00:04:36.685 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:36.685 SO libspdk_sock_posix.so.6.0 00:04:36.943 SYMLINK libspdk_bdev_error.so 00:04:36.943 SO libspdk_accel_iaa.so.3.0 00:04:36.943 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:36.943 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:36.943 SYMLINK libspdk_sock_posix.so 00:04:36.943 LIB libspdk_bdev_null.a 00:04:36.943 SYMLINK libspdk_accel_iaa.so 00:04:36.943 SO libspdk_bdev_null.so.6.0 00:04:36.943 LIB libspdk_bdev_malloc.a 00:04:36.943 LIB libspdk_blobfs_bdev.a 00:04:36.943 SO libspdk_bdev_malloc.so.6.0 00:04:36.943 SO libspdk_blobfs_bdev.so.6.0 00:04:37.200 SYMLINK libspdk_bdev_null.so 00:04:37.200 LIB libspdk_bdev_delay.a 00:04:37.200 LIB libspdk_bdev_gpt.a 00:04:37.200 SYMLINK libspdk_bdev_malloc.so 00:04:37.200 SO libspdk_bdev_delay.so.6.0 00:04:37.200 SYMLINK libspdk_blobfs_bdev.so 
00:04:37.200 SO libspdk_bdev_gpt.so.6.0 00:04:37.200 CC module/bdev/nvme/bdev_nvme.o 00:04:37.200 CC module/bdev/passthru/vbdev_passthru.o 00:04:37.200 SYMLINK libspdk_bdev_delay.so 00:04:37.200 CC module/bdev/raid/bdev_raid.o 00:04:37.200 SYMLINK libspdk_bdev_gpt.so 00:04:37.200 CC module/bdev/split/vbdev_split.o 00:04:37.458 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:37.458 CC module/bdev/aio/bdev_aio.o 00:04:37.458 CC module/bdev/ftl/bdev_ftl.o 00:04:37.458 CC module/bdev/iscsi/bdev_iscsi.o 00:04:37.458 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:37.458 LIB libspdk_bdev_lvol.a 00:04:37.458 CC module/bdev/split/vbdev_split_rpc.o 00:04:37.716 SO libspdk_bdev_lvol.so.6.0 00:04:37.716 SYMLINK libspdk_bdev_lvol.so 00:04:37.716 LIB libspdk_bdev_split.a 00:04:37.716 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:37.716 SO libspdk_bdev_split.so.6.0 00:04:37.716 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:37.973 SYMLINK libspdk_bdev_split.so 00:04:37.973 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:37.973 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:37.973 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:37.973 CC module/bdev/aio/bdev_aio_rpc.o 00:04:37.973 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:37.973 LIB libspdk_bdev_passthru.a 00:04:37.973 SO libspdk_bdev_passthru.so.6.0 00:04:37.973 LIB libspdk_bdev_iscsi.a 00:04:37.973 SYMLINK libspdk_bdev_passthru.so 00:04:37.973 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:37.973 SO libspdk_bdev_iscsi.so.6.0 00:04:37.973 LIB libspdk_bdev_aio.a 00:04:38.231 CC module/bdev/raid/bdev_raid_rpc.o 00:04:38.231 SO libspdk_bdev_aio.so.6.0 00:04:38.231 CC module/bdev/raid/bdev_raid_sb.o 00:04:38.231 LIB libspdk_bdev_zone_block.a 00:04:38.231 SYMLINK libspdk_bdev_aio.so 00:04:38.231 SYMLINK libspdk_bdev_iscsi.so 00:04:38.231 CC module/bdev/nvme/nvme_rpc.o 00:04:38.231 CC module/bdev/nvme/bdev_mdns_client.o 00:04:38.231 LIB libspdk_bdev_virtio.a 00:04:38.231 SO libspdk_bdev_zone_block.so.6.0 00:04:38.231 SO libspdk_bdev_virtio.so.6.0 00:04:38.231 LIB libspdk_bdev_ftl.a 00:04:38.231 SYMLINK libspdk_bdev_zone_block.so 00:04:38.231 SO libspdk_bdev_ftl.so.6.0 00:04:38.490 CC module/bdev/nvme/vbdev_opal.o 00:04:38.490 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:38.490 SYMLINK libspdk_bdev_virtio.so 00:04:38.490 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:38.490 SYMLINK libspdk_bdev_ftl.so 00:04:38.490 CC module/bdev/raid/raid0.o 00:04:38.490 CC module/bdev/raid/raid1.o 00:04:38.490 CC module/bdev/raid/concat.o 00:04:39.057 LIB libspdk_bdev_raid.a 00:04:39.057 SO libspdk_bdev_raid.so.6.0 00:04:39.057 SYMLINK libspdk_bdev_raid.so 00:04:39.623 LIB libspdk_bdev_nvme.a 00:04:39.623 SO libspdk_bdev_nvme.so.7.0 00:04:39.881 SYMLINK libspdk_bdev_nvme.so 00:04:40.139 CC module/event/subsystems/scheduler/scheduler.o 00:04:40.139 CC module/event/subsystems/keyring/keyring.o 00:04:40.139 CC module/event/subsystems/sock/sock.o 00:04:40.139 CC module/event/subsystems/iobuf/iobuf.o 00:04:40.139 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:40.139 CC module/event/subsystems/vmd/vmd.o 00:04:40.139 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:40.139 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:40.397 LIB libspdk_event_keyring.a 00:04:40.397 LIB libspdk_event_scheduler.a 00:04:40.397 SO libspdk_event_keyring.so.1.0 00:04:40.397 LIB libspdk_event_sock.a 00:04:40.397 SO libspdk_event_scheduler.so.4.0 00:04:40.397 SO libspdk_event_sock.so.5.0 00:04:40.397 SYMLINK libspdk_event_keyring.so 00:04:40.397 LIB libspdk_event_vhost_blk.a 00:04:40.397 LIB 
libspdk_event_vmd.a 00:04:40.397 SYMLINK libspdk_event_scheduler.so 00:04:40.397 LIB libspdk_event_iobuf.a 00:04:40.397 SO libspdk_event_vhost_blk.so.3.0 00:04:40.654 SYMLINK libspdk_event_sock.so 00:04:40.654 SO libspdk_event_vmd.so.6.0 00:04:40.655 SO libspdk_event_iobuf.so.3.0 00:04:40.655 SYMLINK libspdk_event_vhost_blk.so 00:04:40.655 SYMLINK libspdk_event_vmd.so 00:04:40.655 SYMLINK libspdk_event_iobuf.so 00:04:40.912 CC module/event/subsystems/accel/accel.o 00:04:41.168 LIB libspdk_event_accel.a 00:04:41.169 SO libspdk_event_accel.so.6.0 00:04:41.169 SYMLINK libspdk_event_accel.so 00:04:41.426 CC module/event/subsystems/bdev/bdev.o 00:04:41.684 LIB libspdk_event_bdev.a 00:04:41.684 SO libspdk_event_bdev.so.6.0 00:04:41.684 SYMLINK libspdk_event_bdev.so 00:04:41.941 CC module/event/subsystems/ublk/ublk.o 00:04:41.942 CC module/event/subsystems/scsi/scsi.o 00:04:41.942 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:41.942 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:41.942 CC module/event/subsystems/nbd/nbd.o 00:04:42.199 LIB libspdk_event_nbd.a 00:04:42.199 LIB libspdk_event_ublk.a 00:04:42.199 LIB libspdk_event_scsi.a 00:04:42.199 SO libspdk_event_ublk.so.3.0 00:04:42.199 SO libspdk_event_nbd.so.6.0 00:04:42.199 SO libspdk_event_scsi.so.6.0 00:04:42.199 SYMLINK libspdk_event_nbd.so 00:04:42.199 LIB libspdk_event_nvmf.a 00:04:42.199 SYMLINK libspdk_event_ublk.so 00:04:42.199 SYMLINK libspdk_event_scsi.so 00:04:42.199 SO libspdk_event_nvmf.so.6.0 00:04:42.199 SYMLINK libspdk_event_nvmf.so 00:04:42.457 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:42.457 CC module/event/subsystems/iscsi/iscsi.o 00:04:42.714 LIB libspdk_event_vhost_scsi.a 00:04:42.714 LIB libspdk_event_iscsi.a 00:04:42.714 SO libspdk_event_vhost_scsi.so.3.0 00:04:42.714 SO libspdk_event_iscsi.so.6.0 00:04:42.714 SYMLINK libspdk_event_vhost_scsi.so 00:04:42.714 SYMLINK libspdk_event_iscsi.so 00:04:42.973 SO libspdk.so.6.0 00:04:42.973 SYMLINK libspdk.so 00:04:42.973 CXX app/trace/trace.o 00:04:42.973 CC app/trace_record/trace_record.o 00:04:43.230 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:43.230 CC app/nvmf_tgt/nvmf_main.o 00:04:43.230 CC app/iscsi_tgt/iscsi_tgt.o 00:04:43.230 CC examples/util/zipf/zipf.o 00:04:43.230 CC examples/ioat/perf/perf.o 00:04:43.230 CC test/thread/poller_perf/poller_perf.o 00:04:43.230 CC test/dma/test_dma/test_dma.o 00:04:43.230 LINK nvmf_tgt 00:04:43.230 LINK interrupt_tgt 00:04:43.230 LINK zipf 00:04:43.230 LINK spdk_trace_record 00:04:43.488 LINK poller_perf 00:04:43.488 LINK iscsi_tgt 00:04:43.488 LINK ioat_perf 00:04:43.488 LINK spdk_trace 00:04:43.746 LINK test_dma 00:04:43.746 CC examples/ioat/verify/verify.o 00:04:43.746 CC examples/thread/thread/thread_ex.o 00:04:43.746 CC test/app/bdev_svc/bdev_svc.o 00:04:43.746 CC examples/vmd/lsvmd/lsvmd.o 00:04:43.747 CC examples/idxd/perf/perf.o 00:04:43.747 CC examples/sock/hello_world/hello_sock.o 00:04:43.747 CC examples/vmd/led/led.o 00:04:44.004 CC app/spdk_tgt/spdk_tgt.o 00:04:44.004 LINK lsvmd 00:04:44.004 LINK bdev_svc 00:04:44.004 LINK verify 00:04:44.004 LINK led 00:04:44.004 TEST_HEADER include/spdk/accel.h 00:04:44.004 TEST_HEADER include/spdk/accel_module.h 00:04:44.004 TEST_HEADER include/spdk/assert.h 00:04:44.004 TEST_HEADER include/spdk/barrier.h 00:04:44.004 TEST_HEADER include/spdk/base64.h 00:04:44.004 TEST_HEADER include/spdk/bdev.h 00:04:44.004 TEST_HEADER include/spdk/bdev_module.h 00:04:44.004 TEST_HEADER include/spdk/bdev_zone.h 00:04:44.004 TEST_HEADER include/spdk/bit_array.h 00:04:44.004 
TEST_HEADER include/spdk/bit_pool.h 00:04:44.004 TEST_HEADER include/spdk/blob_bdev.h 00:04:44.004 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:44.004 TEST_HEADER include/spdk/blobfs.h 00:04:44.004 TEST_HEADER include/spdk/blob.h 00:04:44.004 TEST_HEADER include/spdk/conf.h 00:04:44.004 TEST_HEADER include/spdk/config.h 00:04:44.004 TEST_HEADER include/spdk/cpuset.h 00:04:44.004 TEST_HEADER include/spdk/crc16.h 00:04:44.004 TEST_HEADER include/spdk/crc32.h 00:04:44.004 TEST_HEADER include/spdk/crc64.h 00:04:44.004 LINK hello_sock 00:04:44.004 TEST_HEADER include/spdk/dif.h 00:04:44.004 LINK thread 00:04:44.004 TEST_HEADER include/spdk/dma.h 00:04:44.004 TEST_HEADER include/spdk/endian.h 00:04:44.004 TEST_HEADER include/spdk/env_dpdk.h 00:04:44.004 TEST_HEADER include/spdk/env.h 00:04:44.004 TEST_HEADER include/spdk/event.h 00:04:44.004 TEST_HEADER include/spdk/fd_group.h 00:04:44.004 TEST_HEADER include/spdk/fd.h 00:04:44.004 TEST_HEADER include/spdk/file.h 00:04:44.004 TEST_HEADER include/spdk/ftl.h 00:04:44.004 TEST_HEADER include/spdk/gpt_spec.h 00:04:44.004 TEST_HEADER include/spdk/hexlify.h 00:04:44.004 TEST_HEADER include/spdk/histogram_data.h 00:04:44.004 TEST_HEADER include/spdk/idxd.h 00:04:44.004 TEST_HEADER include/spdk/idxd_spec.h 00:04:44.004 TEST_HEADER include/spdk/init.h 00:04:44.004 TEST_HEADER include/spdk/ioat.h 00:04:44.004 TEST_HEADER include/spdk/ioat_spec.h 00:04:44.004 TEST_HEADER include/spdk/iscsi_spec.h 00:04:44.004 TEST_HEADER include/spdk/json.h 00:04:44.004 TEST_HEADER include/spdk/jsonrpc.h 00:04:44.004 TEST_HEADER include/spdk/keyring.h 00:04:44.004 TEST_HEADER include/spdk/keyring_module.h 00:04:44.004 TEST_HEADER include/spdk/likely.h 00:04:44.004 TEST_HEADER include/spdk/log.h 00:04:44.004 TEST_HEADER include/spdk/lvol.h 00:04:44.004 TEST_HEADER include/spdk/memory.h 00:04:44.004 LINK idxd_perf 00:04:44.004 TEST_HEADER include/spdk/mmio.h 00:04:44.004 TEST_HEADER include/spdk/nbd.h 00:04:44.004 TEST_HEADER include/spdk/notify.h 00:04:44.004 TEST_HEADER include/spdk/nvme.h 00:04:44.004 TEST_HEADER include/spdk/nvme_intel.h 00:04:44.262 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:44.262 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:44.262 TEST_HEADER include/spdk/nvme_spec.h 00:04:44.262 LINK spdk_tgt 00:04:44.262 TEST_HEADER include/spdk/nvme_zns.h 00:04:44.262 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:44.262 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:44.262 TEST_HEADER include/spdk/nvmf.h 00:04:44.262 TEST_HEADER include/spdk/nvmf_spec.h 00:04:44.262 TEST_HEADER include/spdk/nvmf_transport.h 00:04:44.262 TEST_HEADER include/spdk/opal.h 00:04:44.262 TEST_HEADER include/spdk/opal_spec.h 00:04:44.262 TEST_HEADER include/spdk/pci_ids.h 00:04:44.262 TEST_HEADER include/spdk/pipe.h 00:04:44.262 TEST_HEADER include/spdk/queue.h 00:04:44.262 TEST_HEADER include/spdk/reduce.h 00:04:44.262 TEST_HEADER include/spdk/rpc.h 00:04:44.262 TEST_HEADER include/spdk/scheduler.h 00:04:44.262 TEST_HEADER include/spdk/scsi.h 00:04:44.262 TEST_HEADER include/spdk/scsi_spec.h 00:04:44.263 TEST_HEADER include/spdk/sock.h 00:04:44.263 TEST_HEADER include/spdk/stdinc.h 00:04:44.263 TEST_HEADER include/spdk/string.h 00:04:44.263 TEST_HEADER include/spdk/thread.h 00:04:44.263 TEST_HEADER include/spdk/trace.h 00:04:44.263 TEST_HEADER include/spdk/trace_parser.h 00:04:44.263 TEST_HEADER include/spdk/tree.h 00:04:44.263 TEST_HEADER include/spdk/ublk.h 00:04:44.263 TEST_HEADER include/spdk/util.h 00:04:44.263 TEST_HEADER include/spdk/uuid.h 00:04:44.263 TEST_HEADER 
include/spdk/version.h 00:04:44.263 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:44.263 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:44.263 TEST_HEADER include/spdk/vhost.h 00:04:44.263 TEST_HEADER include/spdk/vmd.h 00:04:44.263 TEST_HEADER include/spdk/xor.h 00:04:44.263 TEST_HEADER include/spdk/zipf.h 00:04:44.263 CXX test/cpp_headers/accel.o 00:04:44.263 CC app/spdk_lspci/spdk_lspci.o 00:04:44.263 CC test/app/histogram_perf/histogram_perf.o 00:04:44.263 CC test/env/vtophys/vtophys.o 00:04:44.263 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:44.520 CC test/env/mem_callbacks/mem_callbacks.o 00:04:44.520 CXX test/cpp_headers/accel_module.o 00:04:44.521 CC test/env/memory/memory_ut.o 00:04:44.521 LINK spdk_lspci 00:04:44.521 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:44.521 LINK histogram_perf 00:04:44.521 CC app/spdk_nvme_perf/perf.o 00:04:44.521 LINK vtophys 00:04:44.778 LINK env_dpdk_post_init 00:04:44.779 CXX test/cpp_headers/assert.o 00:04:44.779 CXX test/cpp_headers/barrier.o 00:04:44.779 CXX test/cpp_headers/base64.o 00:04:44.779 CXX test/cpp_headers/bdev.o 00:04:45.037 CXX test/cpp_headers/bdev_module.o 00:04:45.037 LINK nvme_fuzz 00:04:45.296 CXX test/cpp_headers/bdev_zone.o 00:04:45.296 LINK mem_callbacks 00:04:45.296 CC app/spdk_nvme_discover/discovery_aer.o 00:04:45.296 CC app/spdk_nvme_identify/identify.o 00:04:45.296 CC examples/nvme/hello_world/hello_world.o 00:04:45.296 CC examples/accel/perf/accel_perf.o 00:04:45.296 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:45.296 CXX test/cpp_headers/bit_array.o 00:04:45.555 CXX test/cpp_headers/bit_pool.o 00:04:45.555 LINK spdk_nvme_perf 00:04:45.555 LINK spdk_nvme_discover 00:04:45.812 LINK hello_world 00:04:45.812 CXX test/cpp_headers/blob_bdev.o 00:04:45.812 CC test/env/pci/pci_ut.o 00:04:45.812 CC test/app/jsoncat/jsoncat.o 00:04:45.812 LINK memory_ut 00:04:45.812 CXX test/cpp_headers/blobfs_bdev.o 00:04:45.812 LINK accel_perf 00:04:46.070 LINK jsoncat 00:04:46.070 CC examples/nvme/reconnect/reconnect.o 00:04:46.070 CXX test/cpp_headers/blobfs.o 00:04:46.329 LINK spdk_nvme_identify 00:04:46.329 CC test/event/event_perf/event_perf.o 00:04:46.329 CC test/event/reactor/reactor.o 00:04:46.329 LINK pci_ut 00:04:46.588 CXX test/cpp_headers/blob.o 00:04:46.588 CC test/nvme/aer/aer.o 00:04:46.588 CC test/event/reactor_perf/reactor_perf.o 00:04:46.588 LINK event_perf 00:04:46.588 LINK reconnect 00:04:46.588 LINK reactor 00:04:46.847 CC app/spdk_top/spdk_top.o 00:04:46.847 CXX test/cpp_headers/conf.o 00:04:46.847 LINK reactor_perf 00:04:46.847 CXX test/cpp_headers/config.o 00:04:46.847 LINK aer 00:04:47.105 CC test/nvme/reset/reset.o 00:04:47.105 CXX test/cpp_headers/cpuset.o 00:04:47.105 CC test/nvme/sgl/sgl.o 00:04:47.105 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:47.363 CXX test/cpp_headers/crc16.o 00:04:47.363 CC app/vhost/vhost.o 00:04:47.363 LINK reset 00:04:47.363 CC test/event/app_repeat/app_repeat.o 00:04:47.363 LINK sgl 00:04:47.363 CXX test/cpp_headers/crc32.o 00:04:47.363 CC app/spdk_dd/spdk_dd.o 00:04:47.635 LINK app_repeat 00:04:47.635 LINK vhost 00:04:47.635 CXX test/cpp_headers/crc64.o 00:04:47.917 LINK nvme_manage 00:04:47.917 LINK spdk_top 00:04:47.917 CC test/nvme/e2edp/nvme_dp.o 00:04:47.917 CC app/fio/nvme/fio_plugin.o 00:04:48.176 CXX test/cpp_headers/dif.o 00:04:48.176 LINK iscsi_fuzz 00:04:48.176 CXX test/cpp_headers/dma.o 00:04:48.176 CXX test/cpp_headers/endian.o 00:04:48.176 CC test/event/scheduler/scheduler.o 00:04:48.434 LINK spdk_dd 00:04:48.434 CC 
examples/nvme/arbitration/arbitration.o 00:04:48.692 CC test/rpc_client/rpc_client_test.o 00:04:48.692 LINK nvme_dp 00:04:48.692 CXX test/cpp_headers/env_dpdk.o 00:04:48.692 LINK scheduler 00:04:48.692 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:48.950 LINK rpc_client_test 00:04:48.950 CXX test/cpp_headers/env.o 00:04:48.950 CXX test/cpp_headers/event.o 00:04:48.950 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:48.950 CC test/accel/dif/dif.o 00:04:48.950 CC test/nvme/overhead/overhead.o 00:04:48.950 LINK spdk_nvme 00:04:49.207 LINK arbitration 00:04:49.207 CXX test/cpp_headers/fd_group.o 00:04:49.207 CC app/fio/bdev/fio_plugin.o 00:04:49.207 CC test/app/stub/stub.o 00:04:49.465 CC test/nvme/err_injection/err_injection.o 00:04:49.465 LINK overhead 00:04:49.465 LINK vhost_fuzz 00:04:49.465 CXX test/cpp_headers/fd.o 00:04:49.465 CC examples/nvme/hotplug/hotplug.o 00:04:49.465 CC test/blobfs/mkfs/mkfs.o 00:04:49.465 LINK stub 00:04:49.465 LINK dif 00:04:49.724 LINK err_injection 00:04:49.724 CXX test/cpp_headers/file.o 00:04:49.724 CC test/nvme/startup/startup.o 00:04:49.724 CC test/nvme/reserve/reserve.o 00:04:49.724 LINK mkfs 00:04:49.981 LINK spdk_bdev 00:04:49.981 LINK hotplug 00:04:49.981 CXX test/cpp_headers/ftl.o 00:04:49.981 LINK reserve 00:04:49.981 CC test/nvme/simple_copy/simple_copy.o 00:04:49.981 CC test/nvme/connect_stress/connect_stress.o 00:04:49.981 LINK startup 00:04:50.238 CC test/nvme/boot_partition/boot_partition.o 00:04:50.238 CXX test/cpp_headers/gpt_spec.o 00:04:50.238 CXX test/cpp_headers/hexlify.o 00:04:50.238 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:50.238 CC examples/nvme/abort/abort.o 00:04:50.238 CC test/lvol/esnap/esnap.o 00:04:50.238 LINK simple_copy 00:04:50.238 CC test/nvme/compliance/nvme_compliance.o 00:04:50.238 LINK boot_partition 00:04:50.238 LINK connect_stress 00:04:50.238 CXX test/cpp_headers/histogram_data.o 00:04:50.496 CC test/nvme/fused_ordering/fused_ordering.o 00:04:50.496 LINK cmb_copy 00:04:50.496 CXX test/cpp_headers/idxd.o 00:04:50.496 LINK abort 00:04:50.496 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:50.754 CC test/nvme/fdp/fdp.o 00:04:50.754 CXX test/cpp_headers/idxd_spec.o 00:04:50.754 LINK fused_ordering 00:04:50.754 LINK nvme_compliance 00:04:50.754 CC test/nvme/cuse/cuse.o 00:04:50.754 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:50.754 LINK doorbell_aers 00:04:50.754 CXX test/cpp_headers/init.o 00:04:50.754 CXX test/cpp_headers/ioat.o 00:04:50.754 CXX test/cpp_headers/ioat_spec.o 00:04:51.011 LINK pmr_persistence 00:04:51.011 LINK fdp 00:04:51.011 CXX test/cpp_headers/iscsi_spec.o 00:04:51.011 CC test/bdev/bdevio/bdevio.o 00:04:51.011 CXX test/cpp_headers/json.o 00:04:51.011 CXX test/cpp_headers/jsonrpc.o 00:04:51.268 CC examples/blob/hello_world/hello_blob.o 00:04:51.268 CXX test/cpp_headers/keyring.o 00:04:51.525 CXX test/cpp_headers/keyring_module.o 00:04:51.525 CC examples/blob/cli/blobcli.o 00:04:51.525 CC examples/bdev/hello_world/hello_bdev.o 00:04:51.525 LINK bdevio 00:04:51.525 CC examples/bdev/bdevperf/bdevperf.o 00:04:51.817 CXX test/cpp_headers/likely.o 00:04:51.817 CXX test/cpp_headers/log.o 00:04:51.817 LINK hello_blob 00:04:51.817 LINK hello_bdev 00:04:52.087 CXX test/cpp_headers/lvol.o 00:04:52.087 CXX test/cpp_headers/memory.o 00:04:52.087 CXX test/cpp_headers/mmio.o 00:04:52.087 CXX test/cpp_headers/nbd.o 00:04:52.087 CXX test/cpp_headers/notify.o 00:04:52.087 CXX test/cpp_headers/nvme.o 00:04:52.087 LINK blobcli 00:04:52.087 CXX test/cpp_headers/nvme_intel.o 00:04:52.345 CXX 
test/cpp_headers/nvme_ocssd.o 00:04:52.345 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:52.345 CXX test/cpp_headers/nvme_spec.o 00:04:52.345 CXX test/cpp_headers/nvme_zns.o 00:04:52.345 CXX test/cpp_headers/nvmf_cmd.o 00:04:52.345 LINK cuse 00:04:52.345 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:52.345 CXX test/cpp_headers/nvmf.o 00:04:52.345 CXX test/cpp_headers/nvmf_spec.o 00:04:52.602 CXX test/cpp_headers/nvmf_transport.o 00:04:52.602 CXX test/cpp_headers/opal.o 00:04:52.602 CXX test/cpp_headers/opal_spec.o 00:04:52.602 CXX test/cpp_headers/pci_ids.o 00:04:52.602 CXX test/cpp_headers/pipe.o 00:04:52.602 CXX test/cpp_headers/queue.o 00:04:52.602 CXX test/cpp_headers/reduce.o 00:04:52.602 CXX test/cpp_headers/rpc.o 00:04:52.859 CXX test/cpp_headers/scheduler.o 00:04:52.859 CXX test/cpp_headers/scsi.o 00:04:52.859 CXX test/cpp_headers/scsi_spec.o 00:04:52.859 CXX test/cpp_headers/sock.o 00:04:52.859 CXX test/cpp_headers/stdinc.o 00:04:52.859 CXX test/cpp_headers/string.o 00:04:52.859 CXX test/cpp_headers/thread.o 00:04:52.859 CXX test/cpp_headers/trace.o 00:04:52.859 CXX test/cpp_headers/trace_parser.o 00:04:52.859 LINK bdevperf 00:04:52.859 CXX test/cpp_headers/tree.o 00:04:52.859 CXX test/cpp_headers/ublk.o 00:04:52.859 CXX test/cpp_headers/util.o 00:04:53.117 CXX test/cpp_headers/uuid.o 00:04:53.117 CXX test/cpp_headers/version.o 00:04:53.117 CXX test/cpp_headers/vfio_user_pci.o 00:04:53.117 CXX test/cpp_headers/vfio_user_spec.o 00:04:53.117 CXX test/cpp_headers/vhost.o 00:04:53.117 CXX test/cpp_headers/vmd.o 00:04:53.117 CXX test/cpp_headers/xor.o 00:04:53.117 CXX test/cpp_headers/zipf.o 00:04:53.682 CC examples/nvmf/nvmf/nvmf.o 00:04:53.939 LINK nvmf 00:04:56.467 LINK esnap 00:04:56.726 00:04:56.726 real 1m12.500s 00:04:56.726 user 7m13.552s 00:04:56.726 sys 1m20.451s 00:04:56.726 14:23:08 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:56.726 14:23:08 make -- common/autotest_common.sh@10 -- $ set +x 00:04:56.726 ************************************ 00:04:56.726 END TEST make 00:04:56.726 ************************************ 00:04:56.726 14:23:08 -- common/autotest_common.sh@1142 -- $ return 0 00:04:56.726 14:23:08 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:56.726 14:23:08 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:56.726 14:23:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:56.726 14:23:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:56.726 14:23:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:56.726 14:23:08 -- pm/common@44 -- $ pid=5938 00:04:56.726 14:23:08 -- pm/common@50 -- $ kill -TERM 5938 00:04:56.726 14:23:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:56.726 14:23:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:56.726 14:23:08 -- pm/common@44 -- $ pid=5940 00:04:56.726 14:23:08 -- pm/common@50 -- $ kill -TERM 5940 00:04:56.984 14:23:09 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:56.984 14:23:09 -- nvmf/common.sh@7 -- # uname -s 00:04:56.984 14:23:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:56.984 14:23:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:56.984 14:23:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:56.984 14:23:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:56.984 14:23:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:56.984 14:23:09 -- nvmf/common.sh@13 -- 
# NVMF_IP_LEAST_ADDR=8 00:04:56.984 14:23:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:56.984 14:23:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:56.984 14:23:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:56.984 14:23:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:56.984 14:23:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:04:56.984 14:23:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:04:56.984 14:23:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:56.984 14:23:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:56.984 14:23:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:56.984 14:23:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:56.984 14:23:09 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:56.984 14:23:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:56.984 14:23:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:56.984 14:23:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:56.984 14:23:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.984 14:23:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.984 14:23:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.984 14:23:09 -- paths/export.sh@5 -- # export PATH 00:04:56.984 14:23:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.984 14:23:09 -- nvmf/common.sh@47 -- # : 0 00:04:56.984 14:23:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:56.984 14:23:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:56.984 14:23:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:56.984 14:23:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:56.984 14:23:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:56.984 14:23:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:56.984 14:23:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:56.984 14:23:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:56.984 14:23:09 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:56.984 14:23:09 -- spdk/autotest.sh@32 -- # uname -s 00:04:56.984 14:23:09 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:56.984 14:23:09 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:56.984 14:23:09 -- spdk/autotest.sh@34 
-- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:56.984 14:23:09 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:56.984 14:23:09 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:56.984 14:23:09 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:56.984 14:23:09 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:56.984 14:23:09 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:56.984 14:23:09 -- spdk/autotest.sh@48 -- # udevadm_pid=68505 00:04:56.984 14:23:09 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:56.984 14:23:09 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:56.984 14:23:09 -- pm/common@17 -- # local monitor 00:04:56.984 14:23:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:56.984 14:23:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:56.984 14:23:09 -- pm/common@25 -- # sleep 1 00:04:56.984 14:23:09 -- pm/common@21 -- # date +%s 00:04:56.984 14:23:09 -- pm/common@21 -- # date +%s 00:04:56.984 14:23:09 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720621389 00:04:56.984 14:23:09 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720621389 00:04:56.984 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720621389_collect-cpu-load.pm.log 00:04:56.984 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720621389_collect-vmstat.pm.log 00:04:57.918 14:23:10 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:57.918 14:23:10 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:57.918 14:23:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:57.918 14:23:10 -- common/autotest_common.sh@10 -- # set +x 00:04:57.918 14:23:10 -- spdk/autotest.sh@59 -- # create_test_list 00:04:57.918 14:23:10 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:57.918 14:23:10 -- common/autotest_common.sh@10 -- # set +x 00:04:58.175 14:23:10 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:58.175 14:23:10 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:58.175 14:23:10 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:58.175 14:23:10 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:58.175 14:23:10 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:58.175 14:23:10 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:58.175 14:23:10 -- common/autotest_common.sh@1455 -- # uname 00:04:58.175 14:23:10 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:58.175 14:23:10 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:58.175 14:23:10 -- common/autotest_common.sh@1475 -- # uname 00:04:58.175 14:23:10 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:58.175 14:23:10 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:58.175 14:23:10 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:58.175 14:23:10 -- spdk/autotest.sh@72 -- # hash lcov 00:04:58.175 14:23:10 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:58.175 14:23:10 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:58.175 --rc 
lcov_branch_coverage=1 00:04:58.175 --rc lcov_function_coverage=1 00:04:58.175 --rc genhtml_branch_coverage=1 00:04:58.175 --rc genhtml_function_coverage=1 00:04:58.175 --rc genhtml_legend=1 00:04:58.175 --rc geninfo_all_blocks=1 00:04:58.175 ' 00:04:58.175 14:23:10 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:58.175 --rc lcov_branch_coverage=1 00:04:58.175 --rc lcov_function_coverage=1 00:04:58.175 --rc genhtml_branch_coverage=1 00:04:58.175 --rc genhtml_function_coverage=1 00:04:58.175 --rc genhtml_legend=1 00:04:58.175 --rc geninfo_all_blocks=1 00:04:58.175 ' 00:04:58.175 14:23:10 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:58.175 --rc lcov_branch_coverage=1 00:04:58.175 --rc lcov_function_coverage=1 00:04:58.175 --rc genhtml_branch_coverage=1 00:04:58.175 --rc genhtml_function_coverage=1 00:04:58.175 --rc genhtml_legend=1 00:04:58.175 --rc geninfo_all_blocks=1 00:04:58.175 --no-external' 00:04:58.175 14:23:10 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:58.175 --rc lcov_branch_coverage=1 00:04:58.175 --rc lcov_function_coverage=1 00:04:58.175 --rc genhtml_branch_coverage=1 00:04:58.175 --rc genhtml_function_coverage=1 00:04:58.175 --rc genhtml_legend=1 00:04:58.175 --rc geninfo_all_blocks=1 00:04:58.175 --no-external' 00:04:58.175 14:23:10 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:58.175 lcov: LCOV version 1.14 00:04:58.175 14:23:10 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:16.276 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:16.276 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no 
functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 
00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:28.477 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:28.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:28.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:28.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:28.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:28.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:28.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:28.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:28.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:28.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:28.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:28.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:28.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:28.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:28.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:28.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:28.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:28.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:28.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:28.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:28.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:28.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:28.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:28.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:28.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:28.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:28.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:28.735 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:28.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:28.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:28.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:28.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:28.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:28.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:28.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:28.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:28.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:28.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:28.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:28.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:28.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:28.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:28.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:28.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:28.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:28.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:28.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:28.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:28.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:28.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:28.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:28.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:28.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:28.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:28.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:28.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:28.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:28.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:28.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:28.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 
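The long run of geninfo warnings here is expected: the test/cpp_headers objects exist only to prove that each public header compiles on its own, so their .gcno files contain no executable functions and the baseline capture started earlier in this log (lcov -q -c -i -t Baseline ...) has nothing to record for them. A minimal sketch of that baseline-then-merge coverage flow, using only standard lcov options; the build-directory path below is illustrative, not the job's real path:

# 1. Capture an "initial" baseline right after the build, before any test runs,
#    so files that are never executed still appear at 0% in the final report.
lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q -c -i \
     -t Baseline -d /path/to/build -o cov_base.info

# 2. ...run the test suite; executing the instrumented binaries writes .gcda counters...

# 3. Capture the post-test counters and merge both captures into one tracefile.
lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q -c \
     -t Tests -d /path/to/build -o cov_test.info
lcov -a cov_base.info -a cov_test.info -o cov_total.info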
00:05:28.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:28.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:28.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:28.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:28.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:28.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:28.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:28.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:28.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:28.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:28.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:28.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:28.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:28.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:28.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:28.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:28.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:28.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:28.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:28.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:28.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:28.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:28.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:28.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:28.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:28.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:28.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:28.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:28.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:28.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:32.918 14:23:44 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:32.918 14:23:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:32.918 14:23:44 -- common/autotest_common.sh@10 -- # set +x 00:05:32.918 14:23:44 -- spdk/autotest.sh@91 -- # rm -f 00:05:32.918 14:23:44 -- spdk/autotest.sh@94 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:33.483 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:33.483 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:33.483 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:33.483 14:23:45 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:33.483 14:23:45 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:33.483 14:23:45 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:33.483 14:23:45 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:33.483 14:23:45 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:33.483 14:23:45 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:33.483 14:23:45 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:33.483 14:23:45 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:33.483 14:23:45 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:33.483 14:23:45 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:33.483 14:23:45 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:33.483 14:23:45 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:33.483 14:23:45 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:33.483 14:23:45 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:33.483 14:23:45 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:33.483 14:23:45 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:05:33.483 14:23:45 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:05:33.483 14:23:45 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:33.483 14:23:45 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:33.483 14:23:45 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:33.483 14:23:45 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:05:33.483 14:23:45 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:05:33.483 14:23:45 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:33.483 14:23:45 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:33.483 14:23:45 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:33.483 14:23:45 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:33.483 14:23:45 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:33.483 14:23:45 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:33.483 14:23:45 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:33.483 14:23:45 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:33.483 No valid GPT data, bailing 00:05:33.483 14:23:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:33.483 14:23:45 -- scripts/common.sh@391 -- # pt= 00:05:33.483 14:23:45 -- scripts/common.sh@392 -- # return 1 00:05:33.483 14:23:45 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:33.483 1+0 records in 00:05:33.483 1+0 records out 00:05:33.483 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00321564 s, 326 MB/s 00:05:33.483 14:23:45 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:33.483 14:23:45 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:33.483 14:23:45 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:05:33.483 14:23:45 
-- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:05:33.483 14:23:45 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:33.483 No valid GPT data, bailing 00:05:33.483 14:23:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:33.483 14:23:45 -- scripts/common.sh@391 -- # pt= 00:05:33.483 14:23:45 -- scripts/common.sh@392 -- # return 1 00:05:33.483 14:23:45 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:33.483 1+0 records in 00:05:33.483 1+0 records out 00:05:33.483 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00324682 s, 323 MB/s 00:05:33.483 14:23:45 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:33.483 14:23:45 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:33.483 14:23:45 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:05:33.483 14:23:45 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:05:33.483 14:23:45 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:33.741 No valid GPT data, bailing 00:05:33.741 14:23:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:33.741 14:23:45 -- scripts/common.sh@391 -- # pt= 00:05:33.741 14:23:45 -- scripts/common.sh@392 -- # return 1 00:05:33.741 14:23:45 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:33.741 1+0 records in 00:05:33.741 1+0 records out 00:05:33.741 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00363596 s, 288 MB/s 00:05:33.741 14:23:45 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:33.741 14:23:45 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:33.741 14:23:45 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:05:33.741 14:23:45 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:05:33.741 14:23:45 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:33.741 No valid GPT data, bailing 00:05:33.741 14:23:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:33.741 14:23:45 -- scripts/common.sh@391 -- # pt= 00:05:33.741 14:23:45 -- scripts/common.sh@392 -- # return 1 00:05:33.741 14:23:45 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:33.741 1+0 records in 00:05:33.741 1+0 records out 00:05:33.741 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00397007 s, 264 MB/s 00:05:33.741 14:23:45 -- spdk/autotest.sh@118 -- # sync 00:05:33.741 14:23:45 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:33.741 14:23:45 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:33.741 14:23:45 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:35.641 14:23:47 -- spdk/autotest.sh@124 -- # uname -s 00:05:35.641 14:23:47 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:35.641 14:23:47 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:35.641 14:23:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.641 14:23:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.641 14:23:47 -- common/autotest_common.sh@10 -- # set +x 00:05:35.641 ************************************ 00:05:35.641 START TEST setup.sh 00:05:35.641 ************************************ 00:05:35.641 14:23:47 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:35.641 * Looking for test storage... 
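The pre-cleanup step a few entries above checks each NVMe namespace for an existing partition table and only zeroes devices that report none ("No valid GPT data, bailing", then a 1 MiB dd). A standalone sketch of that same check, not the autotest helper itself; only the device glob and the dd size are taken from the log, the rest is illustrative:

#!/usr/bin/env bash
shopt -s extglob                     # enables the !(*p*) glob that skips partitions

for dev in /dev/nvme*n!(*p*); do
    [[ -b $dev ]] || continue
    # blkid prints an empty PTTYPE when the device carries no partition table.
    if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
        # Wipe the first 1 MiB so stale GPT/filesystem signatures cannot
        # confuse the tests that run afterwards.
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done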
00:05:35.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:35.642 14:23:47 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:35.642 14:23:47 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:35.642 14:23:47 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:35.642 14:23:47 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.642 14:23:47 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.642 14:23:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:35.642 ************************************ 00:05:35.642 START TEST acl 00:05:35.642 ************************************ 00:05:35.642 14:23:47 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:35.642 * Looking for test storage... 00:05:35.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:35.642 14:23:47 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:35.642 14:23:47 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:35.642 14:23:47 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:35.642 14:23:47 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:35.642 14:23:47 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:35.642 14:23:47 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:35.642 14:23:47 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:35.642 14:23:47 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:35.642 14:23:47 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:35.642 14:23:47 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:35.642 14:23:47 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:35.642 14:23:47 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:35.642 14:23:47 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:35.642 14:23:47 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:35.642 14:23:47 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:35.642 14:23:47 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:05:35.642 14:23:47 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:05:35.642 14:23:47 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:35.642 14:23:47 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:35.642 14:23:47 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:35.642 14:23:47 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:05:35.642 14:23:47 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:05:35.642 14:23:47 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:35.642 14:23:47 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:35.642 14:23:47 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:35.642 14:23:47 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:35.642 14:23:47 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:35.642 
14:23:47 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:35.642 14:23:47 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:35.642 14:23:47 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:35.642 14:23:47 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:36.208 14:23:48 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:36.208 14:23:48 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:36.208 14:23:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.208 14:23:48 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:36.208 14:23:48 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:36.208 14:23:48 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:36.775 14:23:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:36.775 14:23:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:36.775 14:23:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.775 Hugepages 00:05:36.775 node hugesize free / total 00:05:36.775 14:23:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:36.775 14:23:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:36.775 14:23:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.775 00:05:36.775 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:36.775 14:23:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:36.775 14:23:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:36.775 14:23:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.775 14:23:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:36.775 14:23:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:36.775 14:23:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:36.775 14:23:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:37.033 14:23:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:37.033 14:23:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:37.033 14:23:49 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:37.033 14:23:49 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:37.033 14:23:49 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:37.033 14:23:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:37.033 14:23:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:05:37.033 14:23:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:37.033 14:23:49 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:37.033 14:23:49 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:37.033 14:23:49 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:37.033 14:23:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:37.033 14:23:49 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:37.033 14:23:49 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:37.033 14:23:49 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.033 14:23:49 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.033 14:23:49 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:37.033 ************************************ 00:05:37.033 START TEST denied 
00:05:37.033 ************************************ 00:05:37.033 14:23:49 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:05:37.033 14:23:49 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:37.033 14:23:49 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:37.033 14:23:49 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:37.033 14:23:49 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:37.033 14:23:49 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:37.966 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:37.966 14:23:49 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:37.966 14:23:49 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:37.966 14:23:49 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:37.966 14:23:49 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:37.966 14:23:49 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:37.966 14:23:49 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:37.966 14:23:49 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:37.966 14:23:49 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:37.966 14:23:49 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:37.966 14:23:49 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:38.225 00:05:38.225 real 0m1.264s 00:05:38.225 user 0m0.510s 00:05:38.225 sys 0m0.709s 00:05:38.225 14:23:50 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.225 14:23:50 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:38.225 ************************************ 00:05:38.225 END TEST denied 00:05:38.225 ************************************ 00:05:38.225 14:23:50 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:38.225 14:23:50 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:38.225 14:23:50 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.225 14:23:50 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.225 14:23:50 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:38.225 ************************************ 00:05:38.225 START TEST allowed 00:05:38.225 ************************************ 00:05:38.225 14:23:50 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:05:38.225 14:23:50 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:38.225 14:23:50 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:38.225 14:23:50 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:38.225 14:23:50 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:38.225 14:23:50 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:39.160 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:39.160 14:23:51 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:05:39.160 14:23:51 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:39.160 14:23:51 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 
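The denied/allowed checks around this point all reduce to one sysfs lookup: resolve /sys/bus/pci/devices/<bdf>/driver and compare the basename against the driver the test expects (nvme here, or uio_pci_generic after setup.sh rebinds an allowed controller). A small sketch of that lookup; the two controller addresses are the ones used by this job, everything else is illustrative:

# Print the kernel driver currently bound to each controller, or a note if none is.
for bdf in 0000:00:10.0 0000:00:11.0; do
    link=/sys/bus/pci/devices/$bdf/driver
    if [[ -e $link ]]; then
        printf '%s -> %s\n' "$bdf" "$(basename "$(readlink -f "$link")")"
    else
        printf '%s -> no driver bound\n' "$bdf"
    fi
done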
00:05:39.160 14:23:51 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:05:39.160 14:23:51 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:05:39.160 14:23:51 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:39.160 14:23:51 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:39.160 14:23:51 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:39.160 14:23:51 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:39.160 14:23:51 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:39.728 00:05:39.728 real 0m1.357s 00:05:39.728 user 0m0.608s 00:05:39.728 sys 0m0.740s 00:05:39.728 14:23:51 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.728 14:23:51 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:39.728 ************************************ 00:05:39.728 END TEST allowed 00:05:39.728 ************************************ 00:05:39.728 14:23:51 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:39.728 ************************************ 00:05:39.728 END TEST acl 00:05:39.728 ************************************ 00:05:39.728 00:05:39.728 real 0m4.217s 00:05:39.728 user 0m1.899s 00:05:39.728 sys 0m2.277s 00:05:39.728 14:23:51 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.728 14:23:51 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:39.728 14:23:51 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:39.728 14:23:51 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:39.728 14:23:51 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.728 14:23:51 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.728 14:23:51 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:39.728 ************************************ 00:05:39.728 START TEST hugepages 00:05:39.728 ************************************ 00:05:39.728 14:23:51 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:39.728 * Looking for test storage... 
00:05:39.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 4444360 kB' 'MemAvailable: 7367436 kB' 'Buffers: 2436 kB' 'Cached: 3123824 kB' 'SwapCached: 0 kB' 'Active: 477716 kB' 'Inactive: 2753544 kB' 'Active(anon): 115492 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 106636 kB' 'Mapped: 48724 kB' 'Shmem: 10492 kB' 'KReclaimable: 88496 kB' 'Slab: 168928 kB' 'SReclaimable: 88496 kB' 'SUnreclaim: 80432 kB' 'KernelStack: 6732 kB' 'PageTables: 4488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 339072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.728 14:23:51 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.728 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
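The repeated read/continue pairs above are setup/common.sh scanning /proc/meminfo one "Key: value" line at a time: with IFS=': ' each line splits into a key, a value and a trailing unit, every key other than the requested one (here Hugepagesize) falls through to continue, and the first match echoes its value back to the caller. A minimal sketch of that pattern follows; the function name and the direct file read are assumptions from the trace rather than the verbatim setup/common.sh source (the real helper scans a mapfile'd snapshot, as the later @28 lines in this log show).

  # Hedged reconstruction of the scan traced at setup/common.sh@31-@33.
  get_meminfo() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # skip every other meminfo key
          echo "$val"                        # e.g. "2048" for Hugepagesize (kB)
          return 0
      done < /proc/meminfo                   # simplification: real code reads a captured snapshot
      return 1                               # key not present
  }

  default_hugepages=$(get_meminfo Hugepagesize)   # -> 2048, matching the echo just below
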
00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:39.729 14:23:51 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:39.729 14:23:52 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:39.729 14:23:52 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:39.729 14:23:52 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:39.729 14:23:52 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:39.729 14:23:52 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:39.729 14:23:52 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:39.729 14:23:52 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:39.729 14:23:52 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:39.729 14:23:52 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:39.729 14:23:52 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:39.729 14:23:52 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:39.729 14:23:52 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:39.729 14:23:52 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:39.729 14:23:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:39.729 14:23:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:39.729 14:23:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:39.729 14:23:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:39.729 14:23:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:39.987 14:23:52 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:39.987 14:23:52 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:39.987 14:23:52 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:39.987 14:23:52 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.987 14:23:52 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.987 14:23:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:39.987 ************************************ 00:05:39.987 START TEST default_setup 00:05:39.987 ************************************ 00:05:39.987 14:23:52 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:05:39.987 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:39.987 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:39.987 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:39.987 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:39.987 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:05:39.987 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:39.987 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:39.987 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:39.987 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:39.987 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:39.987 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:39.987 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:39.987 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:39.987 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:39.987 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:39.987 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:39.987 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:39.987 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:39.987 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:39.987 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:39.987 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:39.987 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:40.553 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:40.553 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:40.553 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:40.823 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:40.823 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:40.823 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:40.823 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:40.823 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:40.823 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:40.823 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:40.823 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:40.823 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:40.823 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:40.823 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:40.823 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:40.823 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:40.823 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.823 14:23:52 
setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.823 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.823 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6551836 kB' 'MemAvailable: 9474748 kB' 'Buffers: 2436 kB' 'Cached: 3123816 kB' 'SwapCached: 0 kB' 'Active: 494664 kB' 'Inactive: 2753560 kB' 'Active(anon): 132440 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 123508 kB' 'Mapped: 48836 kB' 'Shmem: 10468 kB' 'KReclaimable: 88132 kB' 'Slab: 168492 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80360 kB' 'KernelStack: 6656 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
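Earlier in this trace (hugepages.sh@16-@73) the harness turned that 2048 kB page size into a per-node page count: the requested test size of 2097152 kB at a 2048 kB default page works out to nr_hugepages=1024 (2097152 / 2048), all assigned to node 0 since get_nodes found a single NUMA node, and clear_hp first zeroed any pages already reserved. A rough reconstruction is below; the division and the redirect targets of the 'echo 0' lines are assumptions, since xtrace does not show redirections.

  # Hedged reconstruction of hugepages.sh@49-@73 and clear_hp (@37-@45).
  default_hugepages=2048                           # kB, from Hugepagesize
  size=2097152                                     # kB, argument to get_test_nr_hugepages
  nr_hugepages=$(( size / default_hugepages ))     # 2097152 / 2048 = 1024
  nodes_test[0]=$nr_hugepages                      # no_nodes=1 -> everything on node0

  # clear_hp: the bare 'echo 0' pairs in the trace; the nr_hugepages target is assumed.
  for hp in /sys/devices/system/node/node0/hugepages/hugepages-*; do
      echo 0 > "$hp/nr_hugepages"
  done
  export CLEAR_HUGE=yes

Zeroing first presumably makes the counts that verify_nr_hugepages reads back deterministic, regardless of what a previous test left behind.
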
00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
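The AnonHugePages pass being traced here began (at setup/common.sh@17-@29, just above) the same way every get_meminfo call in this log does: no node id was passed, the node-specific /sys/devices/system/node/node$N/meminfo path therefore does not exist, so the helper snapshots the global /proc/meminfo with mapfile and strips any leading "Node <n> " prefix before scanning. A sketch of that front half, with variable names taken from the trace and the combined branch condition an assumption:

  # Hedged sketch of setup/common.sh@17-@29; extglob is needed for the +([0-9]) pattern.
  shopt -s extglob
  node=""                                             # empty here -> use the global file
  mem_f=/proc/meminfo
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")                    # no-op for /proc/meminfo
  printf '%s\n' "${mem[@]}" | grep -m1 AnonHugePages  # -> "AnonHugePages: 0 kB"
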
00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.824 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6552552 kB' 'MemAvailable: 9475464 kB' 'Buffers: 2436 kB' 'Cached: 3123816 kB' 'SwapCached: 0 kB' 'Active: 494696 kB' 'Inactive: 2753560 kB' 'Active(anon): 132472 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 123580 kB' 'Mapped: 48728 kB' 'Shmem: 10468 kB' 'KReclaimable: 88132 kB' 'Slab: 168492 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80360 kB' 'KernelStack: 6672 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 
'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.825 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.826 
14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.826 
14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6552552 kB' 'MemAvailable: 9475464 kB' 'Buffers: 2436 kB' 'Cached: 3123816 kB' 'SwapCached: 0 kB' 'Active: 494524 kB' 'Inactive: 2753560 kB' 'Active(anon): 132300 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 123452 kB' 'Mapped: 48728 kB' 'Shmem: 10468 kB' 'KReclaimable: 88132 kB' 'Slab: 168488 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80356 kB' 'KernelStack: 6688 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
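[Editor note] The first get_meminfo call above has just returned 0 for HugePages_Surp (surp=0), and a second call for HugePages_Rsvd has dumped the whole meminfo table via printf before scanning it key by key. That scan is why xtrace prints one [[ <field> == ... ]] comparison per meminfo line, with the right-hand side rendered character-escaped (\H\u\g\e\P\a\g\e\s\_\S\u\r\p). A minimal sketch of setup/common.sh's get_meminfo, reconstructed from the common.sh@17-@33 trace lines rather than copied from the SPDK source, looks roughly like this:

shopt -s extglob                       # needed for the +([0-9]) pattern below

get_meminfo() {                        # sketch only; argument handling is simplified
    local get=$1 node=$2
    local var val _
    local mem_f=/proc/meminfo mem
    # With a node argument the node-local file is preferred (common.sh@23-@24).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
    while IFS=': ' read -r var val _; do
        # Quoted RHS means a literal comparison; xtrace escapes it character by
        # character, which is exactly the \H\u\g\e... pattern seen in this log.
        [[ $var == "$get" ]] || continue
        echo "$val"                    # e.g. 0 for HugePages_Surp, 1024 for HugePages_Total
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1                           # key not present (not exercised in this trace)
}

With no node argument this reads /proc/meminfo, which matches the [[ -e /sys/devices/system/node/node/meminfo ]] probe (empty $node) visible in the trace above.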
IFS=': ' 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.826 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.827 
14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.827 14:23:52 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.827 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:40.828 nr_hugepages=1024 00:05:40.828 resv_hugepages=0 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:40.828 surplus_hugepages=0 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:40.828 anon_hugepages=0 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:40.828 14:23:52 
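[Editor note] The HugePages_Rsvd scan above also returns 0, and the echoed summary (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) feeds the consistency check traced at hugepages.sh@107-@110, which in turn triggers a third get_meminfo call for HugePages_Total. As a hedged sketch of what the default_setup test is verifying at this point (variable names are taken from the echoes in this log, not from the script source; it reuses the get_meminfo sketch given earlier):

# Values gathered from /proc/meminfo via get_meminfo; comments show this run's values.
surp=$(get_meminfo HugePages_Surp)        # 0
resv=$(get_meminfo HugePages_Rsvd)        # 0
nr_hugepages=1024                         # hugepage count requested by default_setup

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$(get_meminfo AnonHugePages)"

# hugepages.sh@107-@110: the kernel-reported total has to account for the requested
# pages plus any surplus and reserved pages, otherwise the test fails here.
total=$(get_meminfo HugePages_Total)      # 1024 in this run
(( total == nr_hugepages + surp + resv ))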
setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6552052 kB' 'MemAvailable: 9474964 kB' 'Buffers: 2436 kB' 'Cached: 3123816 kB' 'SwapCached: 0 kB' 'Active: 494568 kB' 'Inactive: 2753560 kB' 'Active(anon): 132344 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 123548 kB' 'Mapped: 48788 kB' 'Shmem: 10468 kB' 'KReclaimable: 88132 kB' 'Slab: 168464 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80332 kB' 'KernelStack: 6708 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.828 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:40.829 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:40.830 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:40.830 14:23:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.830 14:23:53 
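[Editor note] The HugePages_Total call above returns 1024, so the global check (( 1024 == nr_hugepages + surp + resv )) passes, and the trace switches to per-NUMA-node accounting: hugepages.sh@27-@33 enumerates /sys/devices/system/node/node*, records 1024 pages for the single node found (no_nodes=1), and hugepages.sh@115-@117 then re-runs get_meminfo with a node argument, which is why common.sh@23-@24 now selects /sys/devices/system/node/node0/meminfo instead of /proc/meminfo. A rough sketch of that pass (the nodes_test bookkeeping and the way the per-node 1024 is read are assumptions, since the trace only shows expanded values; it reuses the get_meminfo sketch given earlier):

shopt -s extglob
declare -a nodes_sys nodes_test           # indexed by NUMA node number
resv=0                                    # reserved hugepages from the global pass above

# hugepages.sh@27-@33 (sketch): record the hugepage count of every NUMA node.
get_nodes() {
    local node
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done
    no_nodes=${#nodes_sys[@]}             # 1 on this single-node VM
    (( no_nodes > 0 ))
}

# hugepages.sh@115-@117 (sketch): fold reserved pages into the expected per-node count
# and read the node-local surplus from node$N/meminfo.
get_nodes
for node in "${!nodes_sys[@]}"; do
    nodes_test[node]=${nodes_sys[node]}
    (( nodes_test[node] += resv ))
    echo "node$node: HugePages_Surp=$(get_meminfo HugePages_Surp "$node")"
done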
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6552052 kB' 'MemUsed: 5689928 kB' 'SwapCached: 0 kB' 'Active: 494392 kB' 'Inactive: 2753560 kB' 'Active(anon): 132168 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 3126252 kB' 'Mapped: 48728 kB' 'AnonPages: 123320 kB' 'Shmem: 10468 kB' 'KernelStack: 6672 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88132 kB' 'Slab: 168460 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80328 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
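For readers following the trace: each get_meminfo call above prints the (optionally per-node) meminfo contents and then walks them with IFS=': ' and read -r var val _, skipping every key that is not the one requested. A self-contained sketch of that lookup pattern, using an illustrative name (get_meminfo_sketch) rather than the real setup/common.sh helper, might look like this:

# Sketch only: mirrors the lookup pattern visible in the xtrace above.
# The /proc and sysfs paths are standard kernel interfaces; the function
# itself is an assumption, not the actual setup/common.sh code.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node counters live in sysfs and prefix every line with "Node <n> ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        # Print the value column for the requested key and stop.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

# e.g. get_meminfo_sketch HugePages_Surp 0   -> prints "0" in the run logged here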
00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.830 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.831 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.831 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.831 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:40.831 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:40.831 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:40.831 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.831 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:40.831 14:23:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:40.831 14:23:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:40.831 14:23:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:40.831 14:23:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:40.831 14:23:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:40.831 14:23:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:40.831 node0=1024 expecting 1024 00:05:40.831 14:23:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:40.831 00:05:40.831 real 0m1.006s 00:05:40.831 user 0m0.463s 00:05:40.831 sys 0m0.443s 00:05:40.831 ************************************ 00:05:40.831 END TEST default_setup 00:05:40.831 ************************************ 00:05:40.831 14:23:53 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.831 14:23:53 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:40.831 14:23:53 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:40.831 14:23:53 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:40.831 14:23:53 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
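The default_setup test closes by comparing the per-node hugepage count it read back against the count it requested ("node0=1024 expecting 1024" above). A trimmed-down, hypothetical version of that final assertion, reading the standard per-node sysfs counter directly, could be:

# Hypothetical check in the spirit of "node0=1024 expecting 1024" above;
# the helper name and the loop over all nodes are assumptions, not setup.sh code.
verify_node_hugepages() {
    local expected=$1 node total
    for node in /sys/devices/system/node/node[0-9]*; do
        # 2048 kB is the default hugepage size reported in this run's meminfo.
        total=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
        echo "$(basename "$node")=$total expecting $expected"
        [[ $total == "$expected" ]] || return 1
    done
}

# verify_node_hugepages 1024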
00:05:40.831 14:23:53 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.831 14:23:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:40.831 ************************************ 00:05:40.831 START TEST per_node_1G_alloc 00:05:40.831 ************************************ 00:05:40.831 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:05:40.831 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:40.831 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:40.831 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:40.831 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:40.831 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:40.831 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:40.831 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:40.831 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:40.831 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:40.831 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:40.831 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:40.831 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:40.831 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:40.831 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:40.831 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:40.831 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:40.831 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:40.831 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:40.831 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:40.831 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:40.831 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:40.831 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:40.831 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:40.831 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:40.831 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:41.089 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:41.089 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:41.089 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:41.352 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:41.352 
14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:41.352 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:41.352 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:41.352 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:41.352 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:41.352 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:41.352 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:41.352 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:41.352 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:41.352 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:41.352 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:41.352 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7607456 kB' 'MemAvailable: 10530368 kB' 'Buffers: 2436 kB' 'Cached: 3123816 kB' 'SwapCached: 0 kB' 'Active: 494968 kB' 'Inactive: 2753560 kB' 'Active(anon): 132744 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 123812 kB' 'Mapped: 48976 kB' 'Shmem: 10468 kB' 'KReclaimable: 88132 kB' 'Slab: 168396 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80264 kB' 'KernelStack: 6696 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.353 14:23:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.353 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 
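The per_node_1G_alloc run above asks scripts/setup.sh for 512 hugepages pinned to node 0 (NRHUGE=512 HUGENODE=0), and verify_nr_hugepages only gathers AnonHugePages because transparent hugepages are not set to [never] on this runner ("always [madvise] never"). Outside the test harness, the per-node request maps onto the standard sysfs knob; a minimal hand-rolled equivalent (illustrative only, not the actual setup.sh logic, which also handles device binding and mounts) is:

# Manual stand-in for NRHUGE=512 HUGENODE=0 — sketch under the assumption
# of 2 MiB default hugepages, as reported in this run's meminfo.
NRHUGE=512
HUGENODE=0
echo "$NRHUGE" | sudo tee \
    /sys/devices/system/node/node${HUGENODE}/hugepages/hugepages-2048kB/nr_hugepages

# THP state consulted by the anon check ("always [madvise] never" here):
cat /sys/kernel/mm/transparent_hugepage/enabled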
00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7607456 kB' 'MemAvailable: 10530368 kB' 'Buffers: 2436 kB' 'Cached: 3123816 kB' 'SwapCached: 0 kB' 'Active: 494568 kB' 'Inactive: 2753560 kB' 'Active(anon): 132344 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 123456 kB' 'Mapped: 48720 kB' 'Shmem: 10468 kB' 'KReclaimable: 88132 kB' 'Slab: 168460 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80328 kB' 'KernelStack: 6672 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.354 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.355 14:23:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.355 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7607456 kB' 'MemAvailable: 10530368 kB' 'Buffers: 2436 kB' 'Cached: 3123816 kB' 'SwapCached: 0 kB' 'Active: 494700 kB' 'Inactive: 2753560 kB' 'Active(anon): 132476 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 123580 kB' 'Mapped: 48728 kB' 'Shmem: 10468 kB' 'KReclaimable: 88132 kB' 'Slab: 168464 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80332 kB' 'KernelStack: 6672 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.356 14:23:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.356 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.357 14:23:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.357 14:23:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.357 14:23:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.357 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.358 14:23:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.358 14:23:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:41.358 nr_hugepages=512 00:05:41.358 resv_hugepages=0 00:05:41.358 surplus_hugepages=0 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:41.358 anon_hugepages=0 
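The long run of "continue" lines above is setup/common.sh's get_meminfo scanning the /proc/meminfo snapshot key by key until it reaches the requested field (first HugePages_Surp, then HugePages_Rsvd), at which point it echoes the value, here 0. A minimal sketch of that loop, reconstructed from the common.sh@17-33 markers in the trace; the name get_meminfo_sketch and the surrounding plumbing are illustrative, not the verbatim SPDK helper:

    shopt -s extglob

    get_meminfo_sketch() {                     # hypothetical name for the reconstruction
        local get=$1 node=${2:-}               # field to look up, optional NUMA node index
        local var val _ mem
        local mem_f=/proc/meminfo
        # With a node index, read the per-node file instead (the @23/@24 branch in the trace).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; strip it (@29).
        mem=("${mem[@]#Node +([0-9]) }")
        # Scan key by key; each non-matching key is one of the "continue" lines in the log (@31/@32).
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo_sketch HugePages_Rsvd     # prints 0 on the box captured above
    get_meminfo_sketch HugePages_Surp 0   # reads /sys/devices/system/node/node0/meminfo when present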
00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7607456 kB' 'MemAvailable: 10530368 kB' 'Buffers: 2436 kB' 'Cached: 3123816 kB' 'SwapCached: 0 kB' 'Active: 494512 kB' 'Inactive: 2753560 kB' 'Active(anon): 132288 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 123440 kB' 'Mapped: 48728 kB' 'Shmem: 10468 kB' 'KReclaimable: 88132 kB' 'Slab: 168460 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80328 kB' 'KernelStack: 6688 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.358 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 
14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.360 14:23:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 
)) 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7607716 kB' 'MemUsed: 4634264 kB' 'SwapCached: 0 kB' 'Active: 494376 kB' 'Inactive: 2753560 kB' 'Active(anon): 132152 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'FilePages: 3126252 kB' 'Mapped: 48728 kB' 'AnonPages: 123300 kB' 'Shmem: 10468 kB' 'KernelStack: 6640 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88132 kB' 'Slab: 168432 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.360 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
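From this point the same scan runs against the per-node file /sys/devices/system/node/node0/meminfo, after hugepages.sh's get_nodes has enumerated the NUMA nodes and recorded 512 pages against each one (nodes_sys, no_nodes=1 on this VM). A rough stand-in for that enumeration; the variable names follow the trace, while the declare, nullglob and the error branch are additions to keep the sketch self-contained:

    shopt -s extglob nullglob
    declare -a nodes_sys
    # Enumerate NUMA nodes and book 512 pages of the test pool against each (hugepages.sh@29/@30).
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512
    done
    no_nodes=${#nodes_sys[@]}                  # 1 on this VM, per "no_nodes=1" in the trace
    (( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }
    # Each node is then checked for surplus pages, e.g.:
    #   get_meminfo_sketch HugePages_Surp 0    # expected to print 0 here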
[xtrace condensed: setup/common.sh get_meminfo walks the remaining per-node meminfo fields (SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free) with IFS=': ' / read -r var val _, hitting continue on each until it reaches the requested HugePages_Surp entry]
00:05:41.361 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:41.361 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:41.361 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
node0=512 expecting 512
00:05:41.361 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:41.361 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:41.361 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:41.361 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:41.361 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:41.362 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:41.362 real 0m0.499s
00:05:41.362 user 0m0.239s
00:05:41.362 sys 0m0.264s
00:05:41.362 ************************************
00:05:41.362 END TEST per_node_1G_alloc
00:05:41.362 ************************************
00:05:41.362 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:41.362 14:23:53 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:41.362 14:23:53 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:05:41.362 14:23:53 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:05:41.362 14:23:53 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:41.362 14:23:53 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:41.362 14:23:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:41.362 ************************************
00:05:41.362 START TEST even_2G_alloc
00:05:41.362 ************************************
00:05:41.362 14:23:53 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:05:41.362 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:41.362 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:41.362 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:41.362 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:41.362 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:41.362 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:41.362 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:41.362 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:41.362 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:41.362 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
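[editor's note] The trace above shows get_test_nr_hugepages turning the requested 2097152 kB (2 GiB) into nr_hugepages=1024 and get_test_nr_hugepages_per_node assigning the whole count to this VM's single node. The sketch below reproduces that arithmetic; it is illustrative only (variable and helper names are assumptions, the real logic lives in SPDK's test/setup scripts), but it yields the same 2097152 / 2048 = 1024 result seen here.

```bash
#!/usr/bin/env bash
# Sketch: convert a requested size in kB into a hugepage count and spread it
# evenly across NUMA nodes. Illustrative only; not SPDK's hugepages.sh.

size_kb=${1:-2097152}   # 2 GiB, as requested by the even_2G_alloc test above

# Default hugepage size reported by the kernel (2048 kB on this runner)
hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)

nr_hugepages=$(( size_kb / hugepage_kb ))    # 2097152 / 2048 = 1024

# Number of NUMA nodes visible in sysfs; fall back to 1 if none are listed
no_nodes=$(ls -d /sys/devices/system/node/node[0-9]* 2>/dev/null | wc -l)
(( no_nodes > 0 )) || no_nodes=1

per_node=$(( nr_hugepages / no_nodes ))
echo "nr_hugepages=${nr_hugepages} (${per_node} per node across ${no_nodes} node(s))"
```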
00:05:41.362 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:41.362 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:41.362 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:41.362 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:41.362 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:41.362 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:05:41.362 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:41.362 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:41.362 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:41.362 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:41.362 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:41.362 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:05:41.362 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:41.362 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:41.620 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:41.884 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:41.884 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:41.884 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:41.884 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:41.884 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:41.884 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:41.884 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:41.884 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:41.884 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:41.884 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:41.884 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:41.884 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:41.884 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:41.884 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:41.884 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:41.884 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.884 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:41.884 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:41.884 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.884 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.884 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:41.884 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6557384 kB' 'MemAvailable: 9480300 kB' 'Buffers: 2436 kB' 'Cached: 3123820 kB' 'SwapCached: 0 kB' 'Active: 495440 kB' 'Inactive: 2753564 kB' 'Active(anon): 133216 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753564 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 124396 kB' 'Mapped: 49528 kB' 'Shmem: 10468 kB' 'KReclaimable: 88132 kB' 'Slab: 168468 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80336 kB' 'KernelStack: 6724 kB' 'PageTables: 4604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 358668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB'
00:05:41.884 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: get_meminfo walks each field of the snapshot above (MemTotal, MemFree, ..., HardwareCorrupted) with IFS=': ' / read -r var val _, hitting continue on each until it reaches the requested AnonHugePages entry]
00:05:41.885 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:41.885 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:41.885 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:41.885 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:41.885 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:41.885 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:41.885 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:41.885 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:41.885 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:41.885 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.885 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:41.885 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:41.885 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.885 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.886 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:41.886 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6557132 kB' 'MemAvailable: 9480044 kB' 'Buffers: 2436 kB' 'Cached: 3123816 kB' 'SwapCached: 0 kB' 'Active: 494476 kB' 'Inactive: 2753560 kB' 'Active(anon): 132252 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123528 kB' 'Mapped: 48616 kB' 'Shmem: 10468 kB' 'KReclaimable: 88132 kB' 'Slab: 168456 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80324 kB' 'KernelStack: 6688 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB'
00:05:41.886 14:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
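[editor's note] The long runs of continue condensed above are setup/common.sh's get_meminfo scanning /proc/meminfo one field at a time with IFS=': ' and read -r var val _ until it reaches the requested key (AnonHugePages, then HugePages_Surp). A condensed sketch of the same lookup follows; the function name and structure are illustrative assumptions, not the SPDK helper itself.

```bash
#!/usr/bin/env bash
# Sketch of the lookup the xtrace above performs field by field: split each
# /proc/meminfo line on ':' and whitespace, and print the value of one key.

meminfo_value() {
    local key=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < /proc/meminfo
    echo 0   # key not present
}

meminfo_value AnonHugePages    # 0 on this runner, per the snapshot above
meminfo_value HugePages_Surp   # also 0, matching the value the test records
```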
[xtrace condensed: get_meminfo walks each field of the snapshot above (MemTotal, MemFree, ..., HugePages_Rsvd) with IFS=': ' / read -r var val _, hitting continue on each until it reaches the requested HugePages_Surp entry]
00:05:41.887 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:41.887 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:41.887 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:41.887 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:41.887 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
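[editor's note] At this point verify_nr_hugepages has collected anon=0 and surp=0 and is about to read HugePages_Rsvd. The sketch below shows one way to express that style of sanity check against the configured pool of 1024 pages; it is a hedged illustration, and the exact comparison SPDK's verify_nr_hugepages performs may differ.

```bash
#!/usr/bin/env bash
# Sketch: compare the kernel's hugepage counters against an expected pool
# size (1024 on this runner). Not SPDK's verify_nr_hugepages; illustrative.

expected=${1:-1024}

read_key() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

total=$(read_key HugePages_Total)
free=$(read_key HugePages_Free)
rsvd=$(read_key HugePages_Rsvd)
surp=$(read_key HugePages_Surp)

# HugePages_Total includes surplus pages, so subtract them to get the
# persistent pool that nr_hugepages configured.
if (( total - surp == expected )); then
    echo "hugepage pool OK: total=$total free=$free rsvd=$rsvd surp=$surp"
else
    echo "unexpected pool: total=$total surp=$surp (expected $expected)" >&2
    exit 1
fi
```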
00:05:41.887 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:41.887 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:41.887 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:41.887 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:41.887 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.887 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:41.887 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:41.887 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.887 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.887 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:41.887 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:41.887 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6557276 kB' 'MemAvailable: 9480192 kB' 'Buffers: 2436 kB' 'Cached: 3123820 kB' 'SwapCached: 0 kB' 'Active: 494700 kB' 'Inactive: 2753564 kB' 'Active(anon): 132476 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753564 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123380 kB' 'Mapped: 48732 kB' 'Shmem: 10468 kB' 'KReclaimable: 88132 kB' 'Slab: 168452 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80320 kB' 'KernelStack: 6656 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: get_meminfo walks each /proc/meminfo field (MemTotal, MemFree, ..., Dirty, Writeback in this excerpt) with IFS=': ' / read -r var val _, hitting continue while looking for the HugePages_Rsvd entry]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.888 14:23:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.888 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:41.889 nr_hugepages=1024 00:05:41.889 resv_hugepages=0 00:05:41.889 surplus_hugepages=0 00:05:41.889 anon_hugepages=0 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 
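By this point the script has surp=0, resv=0 and nr_hugepages=1024, and hugepages.sh@107/@109 assert that the kernel's reported total adds up before the per-node breakdown is inspected. A hedged sketch of that bookkeeping, with variable names mirroring the log and reusing the get_meminfo_sketch helper sketched earlier:

    nr_hugepages=1024 surp=0 resv=0
    total=$(get_meminfo_sketch HugePages_Total)      # helper from the earlier sketch
    if (( total == nr_hugepages + surp + resv )); then
        echo "HugePages_Total=$total matches nr_hugepages + surp + resv"
    else
        echo "unexpected HugePages_Total=$total" >&2
    fi

The trace that follows is the HugePages_Total lookup feeding this check; it echoes 1024, so the comparison at @110 passes and the script moves on to the per-node counts.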
-- # local mem_f mem 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6557276 kB' 'MemAvailable: 9480192 kB' 'Buffers: 2436 kB' 'Cached: 3123820 kB' 'SwapCached: 0 kB' 'Active: 494552 kB' 'Inactive: 2753564 kB' 'Active(anon): 132328 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753564 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123464 kB' 'Mapped: 48732 kB' 'Shmem: 10468 kB' 'KReclaimable: 88132 kB' 'Slab: 168448 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80316 kB' 'KernelStack: 6688 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.889 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.890 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6557276 kB' 'MemUsed: 5684704 kB' 'SwapCached: 0 kB' 'Active: 494576 kB' 'Inactive: 2753564 kB' 'Active(anon): 132352 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 
'Inactive(file): 2753564 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 3126256 kB' 'Mapped: 48732 kB' 'AnonPages: 123464 kB' 'Shmem: 10468 kB' 'KernelStack: 6688 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88132 kB' 'Slab: 168448 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80316 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.891 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
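This pass of the lookup is invoked with node=0, so common.sh switches mem_f to /sys/devices/system/node/node0/meminfo, whose lines carry a leading "Node 0 " prefix that is stripped before the same key/value split runs. A rough per-node equivalent, again using hypothetical names rather than the real helper:

    get_node_meminfo_sketch() {
        local node=$1 get=$2 line var val _
        local mem_f=/sys/devices/system/node/node${node}/meminfo
        [[ -e $mem_f ]] || mem_f=/proc/meminfo        # fall back to the global file
        while read -r line; do
            line=${line#"Node $node "}                # per-node files prefix each line with "Node <id> "
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }
    # e.g.: get_node_meminfo_sketch 0 HugePages_Surp

In the trace this per-node lookup again returns 0 surplus pages, the node0 total of 1024 matches the expectation echoed as "node0=1024 expecting 1024", and the even_2G_alloc test ends successfully.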
00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.892 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:42.150 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.150 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.150 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:42.151 node0=1024 expecting 1024 00:05:42.151 ************************************ 00:05:42.151 END TEST even_2G_alloc 00:05:42.151 ************************************ 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:42.151 00:05:42.151 real 0m0.548s 00:05:42.151 user 0m0.253s 00:05:42.151 sys 0m0.276s 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.151 14:23:54 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:42.151 14:23:54 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:42.151 14:23:54 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:42.151 14:23:54 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.151 14:23:54 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.151 14:23:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:42.151 ************************************ 00:05:42.151 START TEST odd_alloc 00:05:42.151 ************************************ 00:05:42.151 14:23:54 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # 
odd_alloc 00:05:42.151 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:42.151 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:42.151 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:42.151 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:42.151 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:42.151 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:42.151 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:42.151 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:42.151 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:42.151 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:42.151 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:42.151 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:42.151 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:42.151 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:42.151 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:42.151 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:42.151 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:42.151 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:42.151 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:42.151 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:42.151 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:42.151 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:42.151 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.151 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:42.414 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:42.414 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:42.414 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6552212 kB' 'MemAvailable: 9475128 kB' 'Buffers: 2436 kB' 'Cached: 3123820 kB' 'SwapCached: 0 kB' 'Active: 494808 kB' 'Inactive: 2753564 kB' 'Active(anon): 132584 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753564 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123500 kB' 'Mapped: 48860 kB' 'Shmem: 10468 kB' 'KReclaimable: 88132 kB' 'Slab: 168456 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80324 kB' 'KernelStack: 6644 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 356220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.414 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
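The xtrace above is setup/common.sh's get_meminfo helper scanning every key in /proc/meminfo until it reaches the one requested (AnonHugePages here); each non-matching key shows up as the repeated IFS=': ' / read -r var val _ / continue triplet that dominates this part of the log. A minimal sketch of that loop, reconstructed from the trace alone (the real helper in setup/common.sh may differ in detail):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

    # get_meminfo KEY [NODE] - print the value of KEY from /proc/meminfo, or from
    # the per-node meminfo file when a NUMA node is given. Reconstructed from the
    # xtrace; treat it as a sketch, not the exact SPDK implementation.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix used in per-node files
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # every skipped key is one "continue" in the trace
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo AnonHugePages   # prints 0 on this run; further down the trace stores it as anon=0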
00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.415 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6552644 kB' 'MemAvailable: 9475560 kB' 'Buffers: 2436 kB' 'Cached: 3123820 kB' 'SwapCached: 0 kB' 'Active: 494544 kB' 'Inactive: 2753564 kB' 'Active(anon): 132320 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753564 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123464 kB' 'Mapped: 48732 kB' 'Shmem: 10468 kB' 'KReclaimable: 88132 kB' 'Slab: 168468 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80336 kB' 'KernelStack: 6688 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 355852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
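Earlier in this test's trace, odd_alloc asked get_test_nr_hugepages for 2098176 kB (HUGEMEM=2049 MB) and ended up with nr_hugepages=1025, i.e. a deliberately odd number of 2048 kB pages, all assigned to node 0 because no user node list was given and only one node is present. A rough reconstruction of that sizing step, under the assumption that the script simply rounds the request up to whole default-size pages (the trace only shows the inputs and the result, and the variable names below only loosely follow it):

    #!/usr/bin/env bash
    # Inputs seen in the trace.
    size_kb=2098176        # HUGEMEM=2049 MB -> 2049 * 1024 kB
    default_hugepages=2048 # Hugepagesize: 2048 kB (from /proc/meminfo)

    # Round the requested size up to whole hugepages: 2098176 / 2048 = 1024.5 -> 1025.
    nr_hugepages=$(( (size_kb + default_hugepages - 1) / default_hugepages ))
    echo "nr_hugepages=$nr_hugepages"   # nr_hugepages=1025

    # With no user-specified nodes and a single NUMA node, the whole count lands
    # on node 0, matching "nodes_test[_no_nodes - 1]=1025" in the trace.
    declare -a nodes_test
    _no_nodes=1
    nodes_test[_no_nodes - 1]=$nr_hugepages
    echo "node0=${nodes_test[0]}"       # node0=1025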
00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.416 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.417 
14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.417 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- 
# [[ -n '' ]] 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6553000 kB' 'MemAvailable: 9475916 kB' 'Buffers: 2436 kB' 'Cached: 3123820 kB' 'SwapCached: 0 kB' 'Active: 494488 kB' 'Inactive: 2753564 kB' 'Active(anon): 132264 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753564 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123424 kB' 'Mapped: 48732 kB' 'Shmem: 10468 kB' 'KReclaimable: 88132 kB' 'Slab: 168472 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80340 kB' 'KernelStack: 6672 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 356220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.418 
14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.418 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.419 
14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.419 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:42.420 nr_hugepages=1025 00:05:42.420 resv_hugepages=0 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:42.420 surplus_hugepages=0 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:42.420 anon_hugepages=0 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6553000 kB' 'MemAvailable: 9475916 kB' 'Buffers: 2436 kB' 'Cached: 3123820 kB' 'SwapCached: 0 kB' 'Active: 494448 kB' 'Inactive: 2753564 kB' 'Active(anon): 132224 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753564 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123384 kB' 'Mapped: 48732 kB' 'Shmem: 10468 kB' 'KReclaimable: 88132 kB' 'Slab: 168468 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80336 kB' 'KernelStack: 6672 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 356220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.420 14:23:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.420 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.421 14:23:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:42.421 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 
-- # local var val 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6553000 kB' 'MemUsed: 5688980 kB' 'SwapCached: 0 kB' 'Active: 494608 kB' 'Inactive: 2753564 kB' 'Active(anon): 132384 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753564 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 3126256 kB' 'Mapped: 48732 kB' 'AnonPages: 123608 kB' 'Shmem: 10468 kB' 'KernelStack: 6704 kB' 'PageTables: 4488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88132 kB' 'Slab: 168468 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80336 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.422 14:23:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.422 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.423 14:23:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:42.423 node0=1025 expecting 1025 00:05:42.423 ************************************ 00:05:42.423 END TEST odd_alloc 00:05:42.423 ************************************ 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:42.423 00:05:42.423 real 0m0.477s 00:05:42.423 user 0m0.225s 00:05:42.423 sys 0m0.252s 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.423 14:23:54 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:42.681 14:23:54 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:42.681 14:23:54 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:42.681 14:23:54 setup.sh.hugepages -- common/autotest_common.sh@1099 -- 
# '[' 2 -le 1 ']' 00:05:42.681 14:23:54 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.681 14:23:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:42.681 ************************************ 00:05:42.681 START TEST custom_alloc 00:05:42.681 ************************************ 00:05:42.681 14:23:54 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:05:42.681 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:42.681 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:42.681 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:42.681 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:42.681 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:42.681 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:42.681 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:42.681 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:42.681 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:42.681 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:42.681 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:42.681 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:42.681 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:42.681 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:42.681 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:42.681 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:42.681 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:42.681 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:42.681 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:42.681 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:42.681 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:42.681 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:42.681 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:42.682 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:42.682 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:42.682 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:42.682 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:42.682 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:42.682 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:42.682 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # 
get_test_nr_hugepages_per_node 00:05:42.682 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:42.682 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:42.682 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:42.682 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:42.682 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:42.682 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:42.682 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:42.682 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:42.682 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:42.682 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:42.682 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:42.682 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:42.682 14:23:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:42.682 14:23:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.682 14:23:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:42.943 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:42.943 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:42.943 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:42.943 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:42.943 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:42.943 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:42.943 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:42.943 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:42.943 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:42.943 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:42.943 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:42.943 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:42.944 14:23:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7607328 kB' 'MemAvailable: 10530244 kB' 'Buffers: 2436 kB' 'Cached: 3123820 kB' 'SwapCached: 0 kB' 'Active: 494996 kB' 'Inactive: 2753564 kB' 'Active(anon): 132772 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753564 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123888 kB' 'Mapped: 48816 kB' 'Shmem: 10468 kB' 'KReclaimable: 88132 kB' 'Slab: 168492 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80360 kB' 'KernelStack: 6708 kB' 'PageTables: 4568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.944 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7607328 kB' 'MemAvailable: 10530244 kB' 'Buffers: 2436 kB' 'Cached: 3123820 kB' 'SwapCached: 0 kB' 'Active: 494496 kB' 'Inactive: 2753564 kB' 'Active(anon): 132272 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753564 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123384 kB' 'Mapped: 48732 kB' 'Shmem: 10468 kB' 'KReclaimable: 88132 kB' 'Slab: 168484 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80352 kB' 'KernelStack: 6672 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
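The lines around here are one long scan of /proc/meminfo: the helper prints the snapshot captured by mapfile, strips any "Node N " prefix, then reads it back field by field with IFS=': ' until the requested key (AnonHugePages above, HugePages_Surp next) matches. A minimal sketch of that lookup, with names taken from the trace but not guaranteed to match the real setup/common.sh line for line:

#!/usr/bin/env bash
# Minimal sketch of the meminfo lookup driving the trace above; names follow
# the traced setup/common.sh helpers, but the real script may differ in detail.
shopt -s extglob                       # needed for the "Node +([0-9]) " strip below

get_meminfo() {                        # usage: get_meminfo KEY [NODE]
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node lookups read the node-specific meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix of per-node files
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every key until the requested one
        echo "${val:-0}"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    echo 0                             # assumption: report 0 if the key is missing
}

get_meminfo HugePages_Surp             # prints 0 on the VM traced here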
00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.945 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
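The runs of backslashes in these comparisons are only an artifact of bash xtrace: the requested key is expanded from a quoted variable, so it is matched literally, and set -x escapes each character when echoing the [[ ]] test. A hypothetical standalone reproduction (not part of the test scripts), assuming the same bash behaviour as on this VM:

#!/usr/bin/env bash
set -x
get=HugePages_Surp
# the corresponding trace line reads: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[[ MemTotal == "$get" ]] || echo "no match, keep scanning"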
00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:42.946 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:42.947 14:23:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7607328 kB' 'MemAvailable: 10530244 kB' 'Buffers: 2436 kB' 'Cached: 3123820 kB' 'SwapCached: 0 kB' 'Active: 494496 kB' 'Inactive: 2753564 kB' 'Active(anon): 132272 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753564 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123380 kB' 'Mapped: 48732 kB' 'Shmem: 10468 kB' 'KReclaimable: 88132 kB' 'Slab: 168484 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80352 kB' 'KernelStack: 6672 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.947 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.948 14:23:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.948 14:23:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.948 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:42.949 nr_hugepages=512 00:05:42.949 resv_hugepages=0 00:05:42.949 surplus_hugepages=0 00:05:42.949 anon_hugepages=0 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:42.949 
14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7607328 kB' 'MemAvailable: 10530244 kB' 'Buffers: 2436 kB' 'Cached: 3123820 kB' 'SwapCached: 0 kB' 'Active: 494748 kB' 'Inactive: 2753564 kB' 'Active(anon): 132524 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753564 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123632 kB' 'Mapped: 48732 kB' 'Shmem: 10468 kB' 'KReclaimable: 88132 kB' 'Slab: 168484 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80352 kB' 'KernelStack: 6672 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.949 14:23:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.949 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.950 14:23:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:42.950 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:42.951 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:42.951 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:42.951 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:42.951 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:42.951 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:42.951 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:42.951 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:42.951 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:42.951 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:42.951 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:42.951 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:42.951 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.951 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7607328 kB' 'MemUsed: 4634652 kB' 'SwapCached: 0 kB' 'Active: 494524 kB' 'Inactive: 2753564 kB' 'Active(anon): 132300 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753564 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 3126256 kB' 'Mapped: 48732 kB' 'AnonPages: 123404 kB' 'Shmem: 10468 kB' 'KernelStack: 6672 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88132 kB' 'Slab: 168480 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80348 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:42.951 
14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.951 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.951 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.951 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.951 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.951 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.951 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:42.951 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.210 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.210 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.210 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.210 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.210 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.210 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.210 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.210 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.210 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.210 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.210 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.210 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.210 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.210 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.210 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.210 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.210 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.210 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.210 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.210 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.210 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.210 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.210 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.211 14:23:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.211 14:23:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:43.211 node0=512 expecting 512 00:05:43.211 ************************************ 00:05:43.211 END TEST custom_alloc 00:05:43.211 ************************************ 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:43.211 00:05:43.211 real 0m0.525s 00:05:43.211 user 0m0.264s 00:05:43.211 sys 0m0.243s 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.211 14:23:55 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:43.211 14:23:55 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:43.211 14:23:55 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:43.211 14:23:55 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.211 14:23:55 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.211 14:23:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:43.211 ************************************ 00:05:43.211 START TEST no_shrink_alloc 00:05:43.211 ************************************ 00:05:43.211 14:23:55 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:05:43.211 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:43.211 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:43.211 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:43.211 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:43.211 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:43.211 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:43.211 14:23:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:43.211 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:43.211 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:43.211 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:43.211 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:43.211 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:43.211 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:43.211 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:43.211 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:43.211 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:43.211 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:43.211 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:43.211 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:43.211 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:43.211 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:43.211 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:43.474 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:43.474 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:43.474 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6556024 kB' 'MemAvailable: 9478940 kB' 'Buffers: 2436 kB' 'Cached: 3123820 kB' 'SwapCached: 0 kB' 'Active: 494604 kB' 'Inactive: 2753564 kB' 'Active(anon): 132380 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753564 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123780 kB' 'Mapped: 48876 kB' 'Shmem: 10468 kB' 'KReclaimable: 88132 kB' 'Slab: 168440 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80308 kB' 'KernelStack: 6628 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.474 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _
[... setup/common.sh@31-32: per-field IFS=': ' / read -r var val _ / compare / continue trace for Cached through HardwareCorrupted, none matching AnonHugePages (00:05:43.474-00:05:43.476) ...]
00:05:43.476 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:43.476 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:43.476 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:43.476 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
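The trace above is setup/common.sh's get_meminfo walking a /proc/meminfo snapshot (or a per-node copy under /sys/devices/system/node, with the "Node N" prefix stripped) field by field until the requested key, here AnonHugePages, matches, then echoing its value. A minimal self-contained sketch of that pattern follows; the function name get_meminfo_sketch and its argument handling are illustrative, not the real helper's source.

#!/usr/bin/env bash
# Sketch of the scan seen in the xtrace above; names are illustrative.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Prefer the per-node meminfo when a node was requested and the path exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")      # drop the "Node N " prefix of per-node files
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue  # skip every field until the requested one
        echo "${val:-0}"                  # value only, e.g. a kB count or page count
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo_sketch AnonHugePages          # prints 0 on the system captured in this log

The per-field compare/continue pairs in the trace are exactly this loop unrolled by xtrace, one entry per field of the snapshot.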
00:05:43.476 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:43.476 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:43.476 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:43.476 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:43.476 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:43.476 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:43.476 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:43.476 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:43.476 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:43.476 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:43.476 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:43.476 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6555772 kB' 'MemAvailable: 9478688 kB' 'Buffers: 2436 kB' 'Cached: 3123820 kB' 'SwapCached: 0 kB' 'Active: 494532 kB' 'Inactive: 2753564 kB' 'Active(anon): 132308 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753564 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123508 kB' 'Mapped: 48732 kB' 'Shmem: 10468 kB' 'KReclaimable: 88132 kB' 'Slab: 168472 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80340 kB' 'KernelStack: 6688 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB'
00:05:43.476 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... setup/common.sh@31-32: per-field read/compare/continue trace for MemTotal through HugePages_Rsvd, none matching HugePages_Surp (00:05:43.476-00:05:43.478) ...]
00:05:43.478 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:43.478 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:43.478 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:43.478 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
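HugePages_Surp came back 0; the script next queries HugePages_Rsvd the same way and then, a few entries further down in this trace (setup/hugepages.sh@102-109), folds the counters into its no-shrink assertions. A condensed sketch of that bookkeeping, pulling the same counters straight from /proc/meminfo instead of going through setup/common.sh, so treat it as an illustration rather than the SPDK code:

#!/usr/bin/env bash
# Illustrative restatement of the checks logged at setup/hugepages.sh@97-109.
meminfo_val() { awk -v key="$1:" '$1 == key { print $2 }' /proc/meminfo; }

anon=$(meminfo_val AnonHugePages)             # 0 kB in the snapshots above
surp=$(meminfo_val HugePages_Surp)            # 0
resv=$(meminfo_val HugePages_Rsvd)            # 0
nr_hugepages=$(meminfo_val HugePages_Total)   # 1024 in the snapshots above

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

# The allocation must not have shrunk: all 1024 requested pages are still in
# the pool, with no surplus or reserved pages accounted on top of it.
(( 1024 == nr_hugepages + surp + resv ))
(( 1024 == nr_hugepages ))

With Hugepagesize at 2048 kB, the 1024 pages match the Hugetlb total of 2097152 kB reported in the same snapshots.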
00:05:43.478 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:43.478 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:43.478 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:43.478 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:43.478 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:43.478 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:43.478 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:43.478 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:43.478 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:43.478 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:43.478 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:43.478 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:43.478 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6555772 kB' 'MemAvailable: 9478688 kB' 'Buffers: 2436 kB' 'Cached: 3123820 kB' 'SwapCached: 0 kB' 'Active: 494340 kB' 'Inactive: 2753564 kB' 'Active(anon): 132116 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753564 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123516 kB' 'Mapped: 48732 kB' 'Shmem: 10468 kB' 'KReclaimable: 88132 kB' 'Slab: 168472 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80340 kB' 'KernelStack: 6688 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB'
[... setup/common.sh@31-32: per-field read/compare/continue trace for MemTotal through HugePages_Free, none matching HugePages_Rsvd (00:05:43.478-00:05:43.481) ...]
00:05:43.481 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.481 14:23:55
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:43.481 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:43.481 nr_hugepages=1024 00:05:43.481 resv_hugepages=0 00:05:43.481 surplus_hugepages=0 00:05:43.481 anon_hugepages=0 00:05:43.481 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:43.481 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:43.481 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:43.481 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:43.481 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:43.481 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:43.481 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:43.481 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:43.481 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:43.481 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:43.481 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:43.481 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:43.481 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.481 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.481 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.481 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.481 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.481 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.481 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.482 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6555772 kB' 'MemAvailable: 9478688 kB' 'Buffers: 2436 kB' 'Cached: 3123820 kB' 'SwapCached: 0 kB' 'Active: 494244 kB' 'Inactive: 2753564 kB' 'Active(anon): 132020 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753564 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123424 kB' 'Mapped: 48732 kB' 'Shmem: 10468 kB' 'KReclaimable: 88132 kB' 'Slab: 168468 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80336 kB' 'KernelStack: 6672 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 
0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.742 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.743 14:23:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=1024 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.743 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6557128 kB' 'MemUsed: 5684852 kB' 'SwapCached: 0 kB' 'Active: 494312 kB' 'Inactive: 2753564 kB' 'Active(anon): 132088 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753564 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 3126256 kB' 'Mapped: 48732 kB' 'AnonPages: 123376 kB' 'Shmem: 10468 kB' 'KernelStack: 6704 kB' 'PageTables: 4488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88132 kB' 'Slab: 168468 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80336 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.744 14:23:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.744 14:23:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.744 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:43.745 node0=1024 expecting 1024 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:43.745 14:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:44.007 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:44.007 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:44.007 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:44.007 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:44.007 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:44.007 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:44.007 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:44.007 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:44.007 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:44.007 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:44.007 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:44.007 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:44.007 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:44.007 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:44.007 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:44.007 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:44.007 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:44.007 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.007 14:23:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:44.007 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:44.007 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.007 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.007 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.007 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6556656 kB' 'MemAvailable: 9479572 kB' 'Buffers: 2436 kB' 'Cached: 3123820 kB' 'SwapCached: 0 kB' 'Active: 495512 kB' 'Inactive: 2753564 kB' 'Active(anon): 133288 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753564 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 124404 kB' 'Mapped: 48928 kB' 'Shmem: 10468 kB' 'KReclaimable: 88132 kB' 'Slab: 168512 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80380 kB' 'KernelStack: 6772 kB' 'PageTables: 4740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:05:44.007 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.007 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.007 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.007 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.007 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.007 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.007 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.007 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 
14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6556532 kB' 'MemAvailable: 9479448 kB' 'Buffers: 2436 kB' 'Cached: 3123820 kB' 'SwapCached: 0 kB' 'Active: 494528 kB' 'Inactive: 2753564 kB' 'Active(anon): 132304 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753564 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123408 kB' 'Mapped: 48732 kB' 'Shmem: 10468 kB' 'KReclaimable: 88132 kB' 'Slab: 168480 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80348 kB' 'KernelStack: 6672 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 
kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.008 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 
14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
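Earlier entries in this pass also show how the input file is chosen: node= is empty, so /sys/devices/system/node/node/meminfo does not exist and the helper falls back to /proc/meminfo, reads it with mapfile, and strips any "Node <n> " prefix before scanning for HugePages_Surp. Below is a sketch of that node-aware selection using the variable names visible in the trace (mem_f, mem, get, node); it is a reconstruction, not the verbatim setup/common.sh source.

#!/usr/bin/env bash
shopt -s extglob   # the +([0-9]) pattern below needs extended globbing

get_meminfo() {
    local get=$1 node=${2:-} var val _ mem
    local mem_f=/proc/meminfo
    # Prefer the per-NUMA-node meminfo when a node id is given; with node
    # empty (as in this run) the path does not exist and /proc/meminfo is used.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix each line with "Node <n> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "${val//[!0-9]/}"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

surp=$(get_meminfo HugePages_Surp)   # 0 in the run traced here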
00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:44.009 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6557312 kB' 'MemAvailable: 9480228 kB' 'Buffers: 2436 kB' 'Cached: 3123820 kB' 'SwapCached: 0 kB' 'Active: 494676 kB' 'Inactive: 2753564 kB' 'Active(anon): 132452 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753564 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123572 kB' 'Mapped: 48732 kB' 'Shmem: 10468 kB' 'KReclaimable: 88132 kB' 'Slab: 168468 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80336 kB' 'KernelStack: 6704 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:05:44.010 14:23:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.010 14:23:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.010 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
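A few entries further down, hugepages.sh echoes nr_hugepages=1024, resv_hugepages=0 and surplus_hugepages=0, then asserts that the requested count matches HugePages_Total once surplus and reserved pages are accounted for. The following is a minimal sketch of those checks as read from the trace, with get_meminfo reduced to a simplified stand-in for the traced helper.

#!/usr/bin/env bash
get_meminfo() {   # simplified stand-in for the traced setup/common.sh helper
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "${val//[!0-9]/}"; return 0; }
    done < /proc/meminfo
    return 1
}

nr_hugepages=1024                      # value echoed by hugepages.sh@102 below
surp=$(get_meminfo HugePages_Surp)     # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
total=$(get_meminfo HugePages_Total)   # 1024 in this run

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"

# Mirrors the arithmetic guards at hugepages.sh@107 and @109 in the trace:
# the pool must still account for surplus + reserved pages and must not have shrunk.
(( total == nr_hugepages + surp + resv )) || { echo 'hugepage accounting mismatch' >&2; exit 1; }
(( total == nr_hugepages )) || { echo 'hugepage pool shrank unexpectedly' >&2; exit 1; }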
00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:44.011 nr_hugepages=1024 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:44.011 resv_hugepages=0 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:44.011 surplus_hugepages=0 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:44.011 anon_hugepages=0 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6557684 kB' 'MemAvailable: 9480600 kB' 'Buffers: 2436 kB' 'Cached: 3123820 kB' 'SwapCached: 0 kB' 'Active: 494684 kB' 'Inactive: 2753564 kB' 'Active(anon): 132460 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753564 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123568 kB' 'Mapped: 48732 kB' 'Shmem: 10468 kB' 'KReclaimable: 88132 kB' 'Slab: 168464 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80332 kB' 'KernelStack: 6672 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.011 14:23:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.012 14:23:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.012 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.410 14:23:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for 
node in /sys/devices/system/node/node+([0-9]) 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6558004 kB' 'MemUsed: 5683976 kB' 'SwapCached: 0 kB' 'Active: 494320 kB' 'Inactive: 2753564 kB' 'Active(anon): 132096 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2753564 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 3126256 kB' 'Mapped: 48732 kB' 'AnonPages: 123512 kB' 'Shmem: 10468 kB' 'KernelStack: 6688 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88132 kB' 'Slab: 168460 kB' 'SReclaimable: 88132 kB' 'SUnreclaim: 80328 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.410 
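For readers skimming the wall of trace above: get_meminfo simply streams the chosen meminfo file (the per-node /sys/devices/system/node/nodeN/meminfo when a node is given, /proc/meminfo otherwise), splits each line on ': ', skips every field until the requested key matches, and echoes its value — here 1024 for HugePages_Total. A stand-alone sketch of that pattern, with illustrative names rather than the exact setup/common.sh helper (single-digit node IDs assumed for brevity):

  # get_meminfo_field HugePages_Total       -> system-wide value
  # get_meminfo_field HugePages_Surp 0      -> value for NUMA node 0, if the node file exists
  get_meminfo_field() {
      local want=$1 node=$2
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local line var val _
      while read -r line; do
          line=${line#Node [0-9] }              # per-node files prefix every field with "Node N "
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$want" ]]; then
              echo "$val"                       # e.g. 1024 for HugePages_Total in this run
              return 0
          fi
      done < "$mem_f"
      return 1
  }

The traced helper buffers the whole file with mapfile and strips the "Node N" prefix up front before the read loop, but the field-matching behaviour is the same.
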
14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.410 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:44.411 node0=1024 expecting 1024 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:44.411 14:23:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:44.411 00:05:44.411 real 0m1.043s 00:05:44.411 user 0m0.521s 00:05:44.411 sys 0m0.509s 00:05:44.411 ************************************ 00:05:44.411 END TEST no_shrink_alloc 00:05:44.412 ************************************ 00:05:44.412 14:23:56 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.412 14:23:56 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:44.412 14:23:56 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:44.412 14:23:56 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:44.412 14:23:56 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:44.412 14:23:56 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:44.412 14:23:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:44.412 14:23:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:44.412 14:23:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:44.412 14:23:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:44.412 14:23:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:44.412 14:23:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:44.412 ************************************ 00:05:44.412 END TEST hugepages 00:05:44.412 ************************************ 00:05:44.412 00:05:44.412 real 0m4.491s 00:05:44.412 user 0m2.113s 00:05:44.412 sys 0m2.217s 00:05:44.412 14:23:56 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.412 14:23:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:44.412 14:23:56 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:44.412 14:23:56 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:44.412 14:23:56 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.412 14:23:56 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.412 14:23:56 
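The block above closes no_shrink_alloc (node0=1024 expecting 1024) and then clear_hp hands the reserved pages back by writing 0 into every node's nr_hugepages counters before exporting CLEAR_HUGE=yes. A minimal sketch of that reset, assuming the stock sysfs hugepage layout (run as root):

  # Release all explicitly reserved hugepages on every NUMA node, as clear_hp does above.
  for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*; do
          [[ -e $hp ]] || continue             # skip if the glob matched nothing
          echo 0 > "$hp/nr_hugepages"          # give the pages back to the kernel
      done
  done
  export CLEAR_HUGE=yes                        # downstream setup.sh invocations honor this flag
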
setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:44.412 ************************************ 00:05:44.412 START TEST driver 00:05:44.412 ************************************ 00:05:44.412 14:23:56 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:44.412 * Looking for test storage... 00:05:44.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:44.412 14:23:56 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:44.412 14:23:56 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:44.412 14:23:56 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:44.978 14:23:57 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:44.978 14:23:57 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.978 14:23:57 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.978 14:23:57 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:44.978 ************************************ 00:05:44.978 START TEST guess_driver 00:05:44.978 ************************************ 00:05:44.978 14:23:57 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:44.978 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:44.978 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:44.978 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:44.978 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:44.978 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:44.978 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:44.978 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:44.978 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:44.978 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:44.978 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:44.978 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:44.978 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:44.978 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:44.978 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:44.978 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:44.978 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:44.978 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:44.978 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:44.978 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:44.978 Looking for driver=uio_pci_generic 00:05:44.978 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:44.978 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ 
\d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:44.978 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:44.978 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:44.979 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:44.979 14:23:57 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:44.979 14:23:57 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:45.545 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:45.545 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:45.545 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:45.545 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:45.545 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:45.545 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:45.545 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:45.545 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:45.545 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:45.804 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:45.804 14:23:57 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:45.804 14:23:57 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:45.804 14:23:57 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:46.370 00:05:46.370 real 0m1.343s 00:05:46.370 user 0m0.454s 00:05:46.370 sys 0m0.893s 00:05:46.370 ************************************ 00:05:46.370 END TEST guess_driver 00:05:46.370 ************************************ 00:05:46.370 14:23:58 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.370 14:23:58 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:46.370 14:23:58 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:46.370 00:05:46.370 real 0m1.985s 00:05:46.370 user 0m0.674s 00:05:46.370 sys 0m1.357s 00:05:46.370 ************************************ 00:05:46.370 END TEST driver 00:05:46.370 ************************************ 00:05:46.370 14:23:58 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.370 14:23:58 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:46.370 14:23:58 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:46.370 14:23:58 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:46.370 14:23:58 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.370 14:23:58 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.370 14:23:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:46.370 ************************************ 00:05:46.370 START TEST devices 00:05:46.370 ************************************ 00:05:46.370 14:23:58 setup.sh.devices -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:46.370 * Looking for test storage... 00:05:46.370 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:46.370 14:23:58 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:46.370 14:23:58 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:46.370 14:23:58 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:46.371 14:23:58 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:46.936 14:23:59 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:46.936 14:23:59 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:46.936 14:23:59 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:46.936 14:23:59 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:46.936 14:23:59 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:46.936 14:23:59 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:46.936 14:23:59 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:46.936 14:23:59 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:46.936 14:23:59 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:46.936 14:23:59 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:46.936 14:23:59 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:05:46.936 14:23:59 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:05:46.936 14:23:59 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:05:46.936 14:23:59 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:46.936 14:23:59 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:46.936 14:23:59 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:05:46.936 14:23:59 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:05:46.936 14:23:59 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:05:46.936 14:23:59 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:46.936 14:23:59 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:46.936 14:23:59 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:46.936 14:23:59 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:46.936 14:23:59 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:46.936 14:23:59 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:46.936 14:23:59 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:46.936 14:23:59 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:46.936 14:23:59 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:46.936 14:23:59 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:46.936 14:23:59 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:46.936 14:23:59 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:46.936 
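In the driver test above, guess_driver settled on uio_pci_generic because no IOMMU groups were populated and vfio's unsafe no-IOMMU mode was not enabled, and it confirmed the fallback resolves to a real .ko via modprobe --show-depends before accepting it; devices.sh then filters out zoned namespaces by reading /sys/block/*/queue/zoned. Both checks reduced to stand-alone form (function names are illustrative, not the exact driver.sh/devices.sh helpers):

  # Pick a userspace I/O driver: vfio-pci needs populated IOMMU groups or unsafe no-IOMMU mode.
  pick_userspace_driver() {
      local groups=(/sys/kernel/iommu_groups/*)
      local unsafe=/sys/module/vfio/parameters/enable_unsafe_noiommu_mode
      if [[ -e ${groups[0]} ]] || { [[ -e $unsafe ]] && [[ $(cat "$unsafe") == Y ]]; }; then
          echo vfio-pci
      else
          echo uio_pci_generic                 # the fallback chosen in this run
      fi
  }

  # Skip zoned block devices, the way get_zoned_devs checks each nvme namespace above.
  is_zoned() {
      local dev=$1
      [[ -e /sys/block/$dev/queue/zoned ]] && [[ $(cat "/sys/block/$dev/queue/zoned") != none ]]
  }
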
14:23:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:46.936 14:23:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:46.936 14:23:59 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:46.936 14:23:59 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:46.936 14:23:59 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:46.936 14:23:59 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:46.936 14:23:59 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:47.195 No valid GPT data, bailing 00:05:47.195 14:23:59 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:47.195 14:23:59 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:47.195 14:23:59 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:47.195 14:23:59 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:47.195 14:23:59 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:47.195 14:23:59 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:47.195 14:23:59 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:47.195 14:23:59 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:47.195 14:23:59 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:47.195 14:23:59 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:47.195 14:23:59 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:47.195 14:23:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:05:47.195 14:23:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:47.195 14:23:59 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:47.195 14:23:59 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:47.195 14:23:59 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:05:47.195 14:23:59 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:05:47.195 14:23:59 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:05:47.195 No valid GPT data, bailing 00:05:47.195 14:23:59 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:05:47.195 14:23:59 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:47.195 14:23:59 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:47.195 14:23:59 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:05:47.195 14:23:59 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:05:47.195 14:23:59 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:05:47.195 14:23:59 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:47.195 14:23:59 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:47.195 14:23:59 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:47.195 14:23:59 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:47.195 14:23:59 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:47.195 14:23:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:05:47.195 14:23:59 setup.sh.devices -- 
setup/devices.sh@201 -- # ctrl=nvme0 00:05:47.195 14:23:59 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:47.195 14:23:59 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:47.195 14:23:59 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:05:47.195 14:23:59 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:05:47.195 14:23:59 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:05:47.195 No valid GPT data, bailing 00:05:47.195 14:23:59 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:05:47.195 14:23:59 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:47.195 14:23:59 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:47.195 14:23:59 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:05:47.195 14:23:59 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:05:47.195 14:23:59 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:05:47.195 14:23:59 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:47.195 14:23:59 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:47.195 14:23:59 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:47.195 14:23:59 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:47.195 14:23:59 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:47.195 14:23:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:47.195 14:23:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:47.195 14:23:59 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:47.195 14:23:59 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:47.195 14:23:59 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:47.195 14:23:59 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:05:47.195 14:23:59 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:47.453 No valid GPT data, bailing 00:05:47.453 14:23:59 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:47.453 14:23:59 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:47.453 14:23:59 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:47.453 14:23:59 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:47.453 14:23:59 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:47.453 14:23:59 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:47.453 14:23:59 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:47.453 14:23:59 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:47.453 14:23:59 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:47.453 14:23:59 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:47.453 14:23:59 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:47.454 14:23:59 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:47.454 14:23:59 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:47.454 14:23:59 setup.sh.devices -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.454 14:23:59 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.454 14:23:59 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:47.454 ************************************ 00:05:47.454 START TEST nvme_mount 00:05:47.454 ************************************ 00:05:47.454 14:23:59 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:47.454 14:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:47.454 14:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:47.454 14:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.454 14:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:47.454 14:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:47.454 14:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:47.454 14:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:47.454 14:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:47.454 14:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:47.454 14:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:47.454 14:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:47.454 14:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:47.454 14:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:47.454 14:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:47.454 14:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:47.454 14:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:47.454 14:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:47.454 14:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:47.454 14:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:48.389 Creating new GPT entries in memory. 00:05:48.389 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:48.389 other utilities. 00:05:48.389 14:24:00 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:48.389 14:24:00 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:48.389 14:24:00 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:48.389 14:24:00 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:48.389 14:24:00 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:49.330 Creating new GPT entries in memory. 00:05:49.330 The operation has completed successfully. 
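Before mounting anything, each namespace above was probed with spdk-gpt.py and blkid -s PTTYPE ("No valid GPT data, bailing" means the disk carries no partition table and is free to use) and size-checked against min_disk_size; nvme_mount then zapped /dev/nvme0n1 and carved partition 1 over sectors 2048-264191 under flock, waiting for the partition uevent. Roughly, and destructively, that step looks like the following (udevadm settle stands in for the harness's own sync_dev_uevents.sh helper; the device name is this run's example):

  disk=/dev/nvme0n1                            # WARNING: destroys any data on this device
  # blkid prints a PTTYPE value only when a partition table exists; empty output means free.
  blkid -s PTTYPE -o value "$disk" || echo "no partition table on $disk"

  # Wipe any old table, then create one partition spanning sectors 2048-264191,
  # holding the device under flock so concurrent setup runs cannot race the table update.
  flock "$disk" sgdisk "$disk" --zap-all
  flock "$disk" sgdisk "$disk" --new=1:2048:264191
  udevadm settle                               # wait for /dev/nvme0n1p1 to appear
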
00:05:49.330 14:24:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:49.330 14:24:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:49.330 14:24:01 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 72745 00:05:49.330 14:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.331 14:24:01 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:49.331 14:24:01 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.331 14:24:01 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:49.331 14:24:01 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:49.331 14:24:01 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.331 14:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:49.331 14:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:49.331 14:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:49.331 14:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.331 14:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:49.331 14:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:49.331 14:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:49.331 14:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:49.331 14:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:49.331 14:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.331 14:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:49.331 14:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:49.331 14:24:01 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:49.331 14:24:01 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:49.589 14:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.589 14:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:49.589 14:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:49.589 14:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.589 14:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.589 14:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.848 14:24:01 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.848 14:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.848 14:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.848 14:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.848 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:49.848 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:49.848 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.848 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:49.848 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:49.848 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:49.848 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.848 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.848 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:49.848 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:49.848 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:49.848 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:49.848 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:50.107 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:50.107 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:50.107 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:50.107 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:50.107 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:50.107 14:24:02 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:50.107 14:24:02 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:50.107 14:24:02 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:50.107 14:24:02 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:50.107 14:24:02 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:50.107 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:50.107 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:50.107 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:05:50.107 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:50.107 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:50.107 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:50.107 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:50.107 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:50.107 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:50.107 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.107 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:50.107 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:50.107 14:24:02 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:50.107 14:24:02 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:50.366 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:50.366 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:50.366 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:50.366 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.366 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:50.366 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.624 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:50.624 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.624 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:50.624 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.624 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:50.624 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:50.624 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:50.624 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:50.624 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:50.624 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:50.624 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:05:50.624 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:50.624 14:24:02 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:50.624 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:50.624 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:50.624 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:50.624 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:50.624 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:50.624 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.624 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:50.624 14:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:50.624 14:24:02 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:50.624 14:24:02 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:50.892 14:24:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:50.892 14:24:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:50.892 14:24:03 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:50.892 14:24:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.892 14:24:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:50.892 14:24:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.892 14:24:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:50.892 14:24:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.153 14:24:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:51.153 14:24:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.153 14:24:03 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:51.153 14:24:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:51.153 14:24:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:51.153 14:24:03 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:51.154 14:24:03 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:51.154 14:24:03 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:51.154 14:24:03 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:51.154 14:24:03 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:51.154 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:51.154 00:05:51.154 real 0m3.798s 00:05:51.154 user 0m0.663s 00:05:51.154 sys 0m0.890s 00:05:51.154 14:24:03 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.154 14:24:03 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:51.154 ************************************ 00:05:51.154 END TEST nvme_mount 00:05:51.154 
************************************ 00:05:51.154 14:24:03 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:51.154 14:24:03 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:51.154 14:24:03 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.154 14:24:03 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.154 14:24:03 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:51.154 ************************************ 00:05:51.154 START TEST dm_mount 00:05:51.154 ************************************ 00:05:51.154 14:24:03 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:51.154 14:24:03 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:51.154 14:24:03 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:51.154 14:24:03 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:51.154 14:24:03 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:51.154 14:24:03 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:51.154 14:24:03 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:51.154 14:24:03 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:51.154 14:24:03 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:51.154 14:24:03 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:51.154 14:24:03 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:51.154 14:24:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:51.154 14:24:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:51.154 14:24:03 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:51.154 14:24:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:51.154 14:24:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:51.154 14:24:03 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:51.154 14:24:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:51.154 14:24:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:51.154 14:24:03 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:51.154 14:24:03 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:51.154 14:24:03 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:52.529 Creating new GPT entries in memory. 00:05:52.529 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:52.529 other utilities. 00:05:52.529 14:24:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:52.529 14:24:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:52.529 14:24:04 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:52.529 14:24:04 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:52.529 14:24:04 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:53.465 Creating new GPT entries in memory. 00:05:53.465 The operation has completed successfully. 00:05:53.465 14:24:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:53.465 14:24:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:53.465 14:24:05 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:53.465 14:24:05 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:53.465 14:24:05 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:54.401 The operation has completed successfully. 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 73178 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:54.401 
14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:54.401 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:54.671 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.671 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:54.671 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.671 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:54.671 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.671 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:54.671 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.671 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:54.671 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:54.671 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:54.930 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:54.930 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:54.930 14:24:06 
setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:54.930 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:54.930 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:54.930 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:54.930 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:54.930 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:54.930 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:54.930 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:54.930 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:54.930 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.930 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:54.930 14:24:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:54.930 14:24:06 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:54.930 14:24:06 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:54.930 14:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:54.930 14:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:54.930 14:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:54.930 14:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.930 14:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:54.930 14:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.189 14:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:55.189 14:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.189 14:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:55.189 14:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.189 14:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:55.189 14:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:55.189 14:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:55.189 14:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:55.189 14:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:55.189 14:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:55.189 14:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:55.189 14:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 
00:05:55.189 14:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:55.189 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:55.189 14:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:55.189 14:24:07 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:55.189 00:05:55.189 real 0m4.110s 00:05:55.189 user 0m0.468s 00:05:55.189 sys 0m0.615s 00:05:55.189 14:24:07 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.189 ************************************ 00:05:55.189 14:24:07 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:55.189 END TEST dm_mount 00:05:55.189 ************************************ 00:05:55.447 14:24:07 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:55.447 14:24:07 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:55.447 14:24:07 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:55.447 14:24:07 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:55.447 14:24:07 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:55.447 14:24:07 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:55.447 14:24:07 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:55.447 14:24:07 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:55.706 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:55.706 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:55.706 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:55.706 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:55.706 14:24:07 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:55.706 14:24:07 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:55.706 14:24:07 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:55.706 14:24:07 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:55.706 14:24:07 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:55.706 14:24:07 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:55.706 14:24:07 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:55.706 00:05:55.706 real 0m9.326s 00:05:55.706 user 0m1.732s 00:05:55.706 sys 0m2.038s 00:05:55.706 14:24:07 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.706 14:24:07 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:55.706 ************************************ 00:05:55.706 END TEST devices 00:05:55.706 ************************************ 00:05:55.706 14:24:07 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:55.706 00:05:55.706 real 0m20.264s 00:05:55.706 user 0m6.499s 00:05:55.706 sys 0m8.040s 00:05:55.706 14:24:07 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.706 14:24:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:55.706 ************************************ 00:05:55.706 END TEST setup.sh 00:05:55.706 ************************************ 00:05:55.706 14:24:07 -- common/autotest_common.sh@1142 -- # return 0 00:05:55.706 14:24:07 -- 
spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:56.272 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:56.272 Hugepages 00:05:56.272 node hugesize free / total 00:05:56.272 node0 1048576kB 0 / 0 00:05:56.272 node0 2048kB 2048 / 2048 00:05:56.272 00:05:56.272 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:56.272 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:56.530 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:56.530 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:05:56.530 14:24:08 -- spdk/autotest.sh@130 -- # uname -s 00:05:56.530 14:24:08 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:56.530 14:24:08 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:56.530 14:24:08 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:57.097 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:57.097 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:57.355 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:57.355 14:24:09 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:58.290 14:24:10 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:58.290 14:24:10 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:58.290 14:24:10 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:58.290 14:24:10 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:58.290 14:24:10 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:58.290 14:24:10 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:58.290 14:24:10 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:58.290 14:24:10 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:58.290 14:24:10 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:58.290 14:24:10 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:05:58.290 14:24:10 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:58.290 14:24:10 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:58.856 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:58.856 Waiting for block devices as requested 00:05:58.856 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:58.856 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:58.856 14:24:11 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:58.856 14:24:11 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:58.857 14:24:11 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:58.857 14:24:11 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:05:58.857 14:24:11 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:58.857 14:24:11 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:58.857 14:24:11 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:58.857 14:24:11 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:05:58.857 14:24:11 -- common/autotest_common.sh@1539 -- # 
nvme_ctrlr=/dev/nvme1 00:05:58.857 14:24:11 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:05:58.857 14:24:11 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:05:58.857 14:24:11 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:58.857 14:24:11 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:58.857 14:24:11 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:58.857 14:24:11 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:58.857 14:24:11 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:58.857 14:24:11 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:05:58.857 14:24:11 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:58.857 14:24:11 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:58.857 14:24:11 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:58.857 14:24:11 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:58.857 14:24:11 -- common/autotest_common.sh@1557 -- # continue 00:05:58.857 14:24:11 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:58.857 14:24:11 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:58.857 14:24:11 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:58.857 14:24:11 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:05:58.857 14:24:11 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:58.857 14:24:11 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:58.857 14:24:11 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:58.857 14:24:11 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:58.857 14:24:11 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:58.857 14:24:11 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:58.857 14:24:11 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:58.857 14:24:11 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:58.857 14:24:11 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:58.857 14:24:11 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:58.857 14:24:11 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:58.857 14:24:11 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:58.857 14:24:11 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:58.857 14:24:11 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:58.857 14:24:11 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:58.857 14:24:11 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:58.857 14:24:11 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:58.857 14:24:11 -- common/autotest_common.sh@1557 -- # continue 00:05:58.857 14:24:11 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:58.857 14:24:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:58.857 14:24:11 -- common/autotest_common.sh@10 -- # set +x 00:05:59.115 14:24:11 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:59.115 14:24:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:59.115 14:24:11 -- common/autotest_common.sh@10 -- # set +x 00:05:59.115 14:24:11 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:59.681 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:05:59.681 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:59.681 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:59.958 14:24:11 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:59.958 14:24:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:59.958 14:24:11 -- common/autotest_common.sh@10 -- # set +x 00:05:59.958 14:24:12 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:59.958 14:24:12 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:59.958 14:24:12 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:59.958 14:24:12 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:59.958 14:24:12 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:59.958 14:24:12 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:59.958 14:24:12 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:59.958 14:24:12 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:59.958 14:24:12 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:59.958 14:24:12 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:59.958 14:24:12 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:59.958 14:24:12 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:05:59.958 14:24:12 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:59.958 14:24:12 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:59.958 14:24:12 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:59.958 14:24:12 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:59.958 14:24:12 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:59.958 14:24:12 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:59.958 14:24:12 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:59.958 14:24:12 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:59.958 14:24:12 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:59.958 14:24:12 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:59.958 14:24:12 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:59.958 14:24:12 -- common/autotest_common.sh@1593 -- # return 0 00:05:59.958 14:24:12 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:59.958 14:24:12 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:59.958 14:24:12 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:59.959 14:24:12 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:59.959 14:24:12 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:59.959 14:24:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:59.959 14:24:12 -- common/autotest_common.sh@10 -- # set +x 00:05:59.959 14:24:12 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:59.959 14:24:12 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:59.959 14:24:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.959 14:24:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.959 14:24:12 -- common/autotest_common.sh@10 -- # set +x 00:05:59.959 ************************************ 00:05:59.959 START TEST env 00:05:59.959 ************************************ 00:05:59.959 14:24:12 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:59.959 * Looking for test storage... 
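Before the env suite output that follows, the pre-cleanup pass above probed each controller's admin capabilities with nvme-cli. A condensed sketch of that probe (the variable names and the 0x8 mask step are inferred for illustration; the trace only shows the resulting values oacs=0x12a, oacs_ns_manage=8 and unvmcap=0):

    ctrl=/dev/nvme1
    # OACS (Optional Admin Command Support) field from Identify Controller.
    oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)
    # Bit 3 (0x8) of OACS advertises Namespace Management/Attachment support.
    (( ns_manage = oacs & 0x8 ))
    # Unallocated NVM capacity; 0 means there is nothing to revert, so the
    # loop above simply continues to the next controller.
    unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)
    if (( ns_manage != 0 )) && (( unvmcap == 0 )); then
        echo "$ctrl: namespace management supported, nothing to reclaim"
    fi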
00:05:59.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:59.959 14:24:12 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:59.959 14:24:12 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.959 14:24:12 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.959 14:24:12 env -- common/autotest_common.sh@10 -- # set +x 00:05:59.959 ************************************ 00:05:59.959 START TEST env_memory 00:05:59.959 ************************************ 00:05:59.959 14:24:12 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:59.959 00:05:59.959 00:05:59.959 CUnit - A unit testing framework for C - Version 2.1-3 00:05:59.959 http://cunit.sourceforge.net/ 00:05:59.959 00:05:59.959 00:05:59.959 Suite: memory 00:05:59.959 Test: alloc and free memory map ...[2024-07-10 14:24:12.221321] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:59.959 passed 00:06:00.218 Test: mem map translation ...[2024-07-10 14:24:12.247727] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:00.218 [2024-07-10 14:24:12.247978] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:00.218 [2024-07-10 14:24:12.248116] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:00.218 [2024-07-10 14:24:12.248207] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:00.218 passed 00:06:00.218 Test: mem map registration ...[2024-07-10 14:24:12.302072] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:00.218 [2024-07-10 14:24:12.302326] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:00.218 passed 00:06:00.218 Test: mem map adjacent registrations ...passed 00:06:00.218 00:06:00.218 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.218 suites 1 1 n/a 0 0 00:06:00.218 tests 4 4 4 0 0 00:06:00.218 asserts 152 152 152 0 n/a 00:06:00.218 00:06:00.218 Elapsed time = 0.180 seconds 00:06:00.218 00:06:00.218 real 0m0.194s 00:06:00.218 user 0m0.176s 00:06:00.218 sys 0m0.013s 00:06:00.218 14:24:12 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.218 14:24:12 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:00.218 ************************************ 00:06:00.218 END TEST env_memory 00:06:00.218 ************************************ 00:06:00.218 14:24:12 env -- common/autotest_common.sh@1142 -- # return 0 00:06:00.218 14:24:12 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:00.218 14:24:12 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.218 14:24:12 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.218 14:24:12 env -- common/autotest_common.sh@10 -- # set +x 00:06:00.218 ************************************ 00:06:00.218 START TEST env_vtophys 
00:06:00.218 ************************************ 00:06:00.218 14:24:12 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:00.218 EAL: lib.eal log level changed from notice to debug 00:06:00.218 EAL: Detected lcore 0 as core 0 on socket 0 00:06:00.218 EAL: Detected lcore 1 as core 0 on socket 0 00:06:00.218 EAL: Detected lcore 2 as core 0 on socket 0 00:06:00.218 EAL: Detected lcore 3 as core 0 on socket 0 00:06:00.218 EAL: Detected lcore 4 as core 0 on socket 0 00:06:00.218 EAL: Detected lcore 5 as core 0 on socket 0 00:06:00.218 EAL: Detected lcore 6 as core 0 on socket 0 00:06:00.218 EAL: Detected lcore 7 as core 0 on socket 0 00:06:00.218 EAL: Detected lcore 8 as core 0 on socket 0 00:06:00.218 EAL: Detected lcore 9 as core 0 on socket 0 00:06:00.218 EAL: Maximum logical cores by configuration: 128 00:06:00.218 EAL: Detected CPU lcores: 10 00:06:00.218 EAL: Detected NUMA nodes: 1 00:06:00.218 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:06:00.218 EAL: Detected shared linkage of DPDK 00:06:00.218 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24.2 00:06:00.218 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24.2 00:06:00.218 EAL: Registered [vdev] bus. 00:06:00.218 EAL: bus.vdev log level changed from disabled to notice 00:06:00.218 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24.2 00:06:00.218 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24.2 00:06:00.218 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:00.218 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:00.218 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:06:00.218 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:06:00.218 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:06:00.218 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:06:00.218 EAL: No shared files mode enabled, IPC will be disabled 00:06:00.218 EAL: No shared files mode enabled, IPC is disabled 00:06:00.218 EAL: Selected IOVA mode 'PA' 00:06:00.218 EAL: Probing VFIO support... 00:06:00.218 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:00.218 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:00.218 EAL: Ask a virtual area of 0x2e000 bytes 00:06:00.218 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:00.218 EAL: Setting up physically contiguous memory... 
00:06:00.218 EAL: Setting maximum number of open files to 524288 00:06:00.218 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:00.218 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:00.218 EAL: Ask a virtual area of 0x61000 bytes 00:06:00.218 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:00.218 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:00.218 EAL: Ask a virtual area of 0x400000000 bytes 00:06:00.218 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:00.218 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:00.218 EAL: Ask a virtual area of 0x61000 bytes 00:06:00.218 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:00.218 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:00.218 EAL: Ask a virtual area of 0x400000000 bytes 00:06:00.218 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:00.218 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:00.218 EAL: Ask a virtual area of 0x61000 bytes 00:06:00.218 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:00.218 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:00.218 EAL: Ask a virtual area of 0x400000000 bytes 00:06:00.218 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:00.218 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:00.218 EAL: Ask a virtual area of 0x61000 bytes 00:06:00.218 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:00.218 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:00.218 EAL: Ask a virtual area of 0x400000000 bytes 00:06:00.218 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:00.218 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:00.218 EAL: Hugepages will be freed exactly as allocated. 00:06:00.219 EAL: No shared files mode enabled, IPC is disabled 00:06:00.219 EAL: No shared files mode enabled, IPC is disabled 00:06:00.476 EAL: TSC frequency is ~2200000 KHz 00:06:00.476 EAL: Main lcore 0 is ready (tid=7f1b96c1aa00;cpuset=[0]) 00:06:00.476 EAL: Trying to obtain current memory policy. 00:06:00.476 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.476 EAL: Restoring previous memory policy: 0 00:06:00.476 EAL: request: mp_malloc_sync 00:06:00.476 EAL: No shared files mode enabled, IPC is disabled 00:06:00.476 EAL: Heap on socket 0 was expanded by 2MB 00:06:00.476 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:00.476 EAL: No shared files mode enabled, IPC is disabled 00:06:00.476 EAL: Mem event callback 'spdk:(nil)' registered 00:06:00.477 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:00.477 00:06:00.477 00:06:00.477 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.477 http://cunit.sourceforge.net/ 00:06:00.477 00:06:00.477 00:06:00.477 Suite: components_suite 00:06:00.477 Test: vtophys_malloc_test ...passed 00:06:00.477 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:06:00.477 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.477 EAL: Restoring previous memory policy: 4 00:06:00.477 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.477 EAL: request: mp_malloc_sync 00:06:00.477 EAL: No shared files mode enabled, IPC is disabled 00:06:00.477 EAL: Heap on socket 0 was expanded by 4MB 00:06:00.477 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.477 EAL: request: mp_malloc_sync 00:06:00.477 EAL: No shared files mode enabled, IPC is disabled 00:06:00.477 EAL: Heap on socket 0 was shrunk by 4MB 00:06:00.477 EAL: Trying to obtain current memory policy. 00:06:00.477 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.477 EAL: Restoring previous memory policy: 4 00:06:00.477 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.477 EAL: request: mp_malloc_sync 00:06:00.477 EAL: No shared files mode enabled, IPC is disabled 00:06:00.477 EAL: Heap on socket 0 was expanded by 6MB 00:06:00.477 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.477 EAL: request: mp_malloc_sync 00:06:00.477 EAL: No shared files mode enabled, IPC is disabled 00:06:00.477 EAL: Heap on socket 0 was shrunk by 6MB 00:06:00.477 EAL: Trying to obtain current memory policy. 00:06:00.477 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.477 EAL: Restoring previous memory policy: 4 00:06:00.477 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.477 EAL: request: mp_malloc_sync 00:06:00.477 EAL: No shared files mode enabled, IPC is disabled 00:06:00.477 EAL: Heap on socket 0 was expanded by 10MB 00:06:00.477 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.477 EAL: request: mp_malloc_sync 00:06:00.477 EAL: No shared files mode enabled, IPC is disabled 00:06:00.477 EAL: Heap on socket 0 was shrunk by 10MB 00:06:00.477 EAL: Trying to obtain current memory policy. 00:06:00.477 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.477 EAL: Restoring previous memory policy: 4 00:06:00.477 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.477 EAL: request: mp_malloc_sync 00:06:00.477 EAL: No shared files mode enabled, IPC is disabled 00:06:00.477 EAL: Heap on socket 0 was expanded by 18MB 00:06:00.477 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.477 EAL: request: mp_malloc_sync 00:06:00.477 EAL: No shared files mode enabled, IPC is disabled 00:06:00.477 EAL: Heap on socket 0 was shrunk by 18MB 00:06:00.477 EAL: Trying to obtain current memory policy. 00:06:00.477 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.477 EAL: Restoring previous memory policy: 4 00:06:00.477 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.477 EAL: request: mp_malloc_sync 00:06:00.477 EAL: No shared files mode enabled, IPC is disabled 00:06:00.477 EAL: Heap on socket 0 was expanded by 34MB 00:06:00.477 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.477 EAL: request: mp_malloc_sync 00:06:00.477 EAL: No shared files mode enabled, IPC is disabled 00:06:00.477 EAL: Heap on socket 0 was shrunk by 34MB 00:06:00.477 EAL: Trying to obtain current memory policy. 
00:06:00.477 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.477 EAL: Restoring previous memory policy: 4 00:06:00.477 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.477 EAL: request: mp_malloc_sync 00:06:00.477 EAL: No shared files mode enabled, IPC is disabled 00:06:00.477 EAL: Heap on socket 0 was expanded by 66MB 00:06:00.477 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.477 EAL: request: mp_malloc_sync 00:06:00.477 EAL: No shared files mode enabled, IPC is disabled 00:06:00.477 EAL: Heap on socket 0 was shrunk by 66MB 00:06:00.477 EAL: Trying to obtain current memory policy. 00:06:00.477 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.477 EAL: Restoring previous memory policy: 4 00:06:00.477 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.477 EAL: request: mp_malloc_sync 00:06:00.477 EAL: No shared files mode enabled, IPC is disabled 00:06:00.477 EAL: Heap on socket 0 was expanded by 130MB 00:06:00.477 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.477 EAL: request: mp_malloc_sync 00:06:00.477 EAL: No shared files mode enabled, IPC is disabled 00:06:00.477 EAL: Heap on socket 0 was shrunk by 130MB 00:06:00.477 EAL: Trying to obtain current memory policy. 00:06:00.477 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.477 EAL: Restoring previous memory policy: 4 00:06:00.477 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.477 EAL: request: mp_malloc_sync 00:06:00.477 EAL: No shared files mode enabled, IPC is disabled 00:06:00.477 EAL: Heap on socket 0 was expanded by 258MB 00:06:00.477 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.477 EAL: request: mp_malloc_sync 00:06:00.477 EAL: No shared files mode enabled, IPC is disabled 00:06:00.477 EAL: Heap on socket 0 was shrunk by 258MB 00:06:00.477 EAL: Trying to obtain current memory policy. 00:06:00.477 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.735 EAL: Restoring previous memory policy: 4 00:06:00.735 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.735 EAL: request: mp_malloc_sync 00:06:00.735 EAL: No shared files mode enabled, IPC is disabled 00:06:00.735 EAL: Heap on socket 0 was expanded by 514MB 00:06:00.735 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.735 EAL: request: mp_malloc_sync 00:06:00.735 EAL: No shared files mode enabled, IPC is disabled 00:06:00.735 EAL: Heap on socket 0 was shrunk by 514MB 00:06:00.735 EAL: Trying to obtain current memory policy. 
00:06:00.735 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.993 EAL: Restoring previous memory policy: 4 00:06:00.993 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.993 EAL: request: mp_malloc_sync 00:06:00.993 EAL: No shared files mode enabled, IPC is disabled 00:06:00.993 EAL: Heap on socket 0 was expanded by 1026MB 00:06:00.993 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.993 passed 00:06:00.993 00:06:00.993 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.993 suites 1 1 n/a 0 0 00:06:00.993 tests 2 2 2 0 0 00:06:00.993 asserts 5274 5274 5274 0 n/a 00:06:00.993 00:06:00.993 Elapsed time = 0.662 seconds 00:06:00.993 EAL: request: mp_malloc_sync 00:06:00.993 EAL: No shared files mode enabled, IPC is disabled 00:06:00.993 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:00.993 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.993 EAL: request: mp_malloc_sync 00:06:00.993 EAL: No shared files mode enabled, IPC is disabled 00:06:00.993 EAL: Heap on socket 0 was shrunk by 2MB 00:06:00.993 EAL: No shared files mode enabled, IPC is disabled 00:06:00.993 EAL: No shared files mode enabled, IPC is disabled 00:06:00.993 EAL: No shared files mode enabled, IPC is disabled 00:06:00.993 00:06:00.993 real 0m0.859s 00:06:00.993 user 0m0.430s 00:06:00.993 sys 0m0.298s 00:06:00.993 14:24:13 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.993 14:24:13 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:00.993 ************************************ 00:06:00.993 END TEST env_vtophys 00:06:00.993 ************************************ 00:06:01.251 14:24:13 env -- common/autotest_common.sh@1142 -- # return 0 00:06:01.251 14:24:13 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:01.251 14:24:13 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.251 14:24:13 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.251 14:24:13 env -- common/autotest_common.sh@10 -- # set +x 00:06:01.251 ************************************ 00:06:01.251 START TEST env_pci 00:06:01.251 ************************************ 00:06:01.251 14:24:13 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:01.251 00:06:01.251 00:06:01.251 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.251 http://cunit.sourceforge.net/ 00:06:01.251 00:06:01.251 00:06:01.251 Suite: pci 00:06:01.251 Test: pci_hook ...[2024-07-10 14:24:13.327546] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 74360 has claimed it 00:06:01.251 passed 00:06:01.251 00:06:01.251 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.251 suites 1 1 n/a 0 0 00:06:01.251 tests 1 1 1 0 0 00:06:01.251 asserts 25 25 25 0 n/a 00:06:01.251 00:06:01.251 Elapsed time = 0.002 seconds 00:06:01.251 EAL: Cannot find device (10000:00:01.0) 00:06:01.251 EAL: Failed to attach device on primary process 00:06:01.251 00:06:01.251 real 0m0.020s 00:06:01.251 user 0m0.008s 00:06:01.251 sys 0m0.011s 00:06:01.251 14:24:13 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.251 14:24:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:01.251 ************************************ 00:06:01.251 END TEST env_pci 00:06:01.251 ************************************ 00:06:01.251 14:24:13 env -- common/autotest_common.sh@1142 -- # 
return 0 00:06:01.251 14:24:13 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:01.251 14:24:13 env -- env/env.sh@15 -- # uname 00:06:01.251 14:24:13 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:01.251 14:24:13 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:01.252 14:24:13 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:01.252 14:24:13 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:01.252 14:24:13 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.252 14:24:13 env -- common/autotest_common.sh@10 -- # set +x 00:06:01.252 ************************************ 00:06:01.252 START TEST env_dpdk_post_init 00:06:01.252 ************************************ 00:06:01.252 14:24:13 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:01.252 EAL: Detected CPU lcores: 10 00:06:01.252 EAL: Detected NUMA nodes: 1 00:06:01.252 EAL: Detected shared linkage of DPDK 00:06:01.252 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:01.252 EAL: Selected IOVA mode 'PA' 00:06:01.510 Starting DPDK initialization... 00:06:01.510 Starting SPDK post initialization... 00:06:01.510 SPDK NVMe probe 00:06:01.510 Attaching to 0000:00:10.0 00:06:01.510 Attaching to 0000:00:11.0 00:06:01.510 Attached to 0000:00:10.0 00:06:01.510 Attached to 0000:00:11.0 00:06:01.510 Cleaning up... 00:06:01.510 00:06:01.510 real 0m0.186s 00:06:01.510 user 0m0.048s 00:06:01.510 sys 0m0.039s 00:06:01.510 14:24:13 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.510 14:24:13 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:01.510 ************************************ 00:06:01.510 END TEST env_dpdk_post_init 00:06:01.510 ************************************ 00:06:01.510 14:24:13 env -- common/autotest_common.sh@1142 -- # return 0 00:06:01.510 14:24:13 env -- env/env.sh@26 -- # uname 00:06:01.510 14:24:13 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:01.510 14:24:13 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:01.510 14:24:13 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.510 14:24:13 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.510 14:24:13 env -- common/autotest_common.sh@10 -- # set +x 00:06:01.510 ************************************ 00:06:01.510 START TEST env_mem_callbacks 00:06:01.510 ************************************ 00:06:01.510 14:24:13 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:01.510 EAL: Detected CPU lcores: 10 00:06:01.510 EAL: Detected NUMA nodes: 1 00:06:01.510 EAL: Detected shared linkage of DPDK 00:06:01.510 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:01.510 EAL: Selected IOVA mode 'PA' 00:06:01.510 00:06:01.510 00:06:01.510 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.510 http://cunit.sourceforge.net/ 00:06:01.510 00:06:01.510 00:06:01.510 Suite: memory 00:06:01.510 Test: test ... 
00:06:01.510 register 0x200000200000 2097152 00:06:01.510 malloc 3145728 00:06:01.510 register 0x200000400000 4194304 00:06:01.510 buf 0x200000500000 len 3145728 PASSED 00:06:01.510 malloc 64 00:06:01.510 buf 0x2000004fff40 len 64 PASSED 00:06:01.510 malloc 4194304 00:06:01.511 register 0x200000800000 6291456 00:06:01.511 buf 0x200000a00000 len 4194304 PASSED 00:06:01.511 free 0x200000500000 3145728 00:06:01.511 free 0x2000004fff40 64 00:06:01.511 unregister 0x200000400000 4194304 PASSED 00:06:01.511 free 0x200000a00000 4194304 00:06:01.511 unregister 0x200000800000 6291456 PASSED 00:06:01.511 malloc 8388608 00:06:01.511 register 0x200000400000 10485760 00:06:01.511 buf 0x200000600000 len 8388608 PASSED 00:06:01.511 free 0x200000600000 8388608 00:06:01.511 unregister 0x200000400000 10485760 PASSED 00:06:01.511 passed 00:06:01.511 00:06:01.511 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.511 suites 1 1 n/a 0 0 00:06:01.511 tests 1 1 1 0 0 00:06:01.511 asserts 15 15 15 0 n/a 00:06:01.511 00:06:01.511 Elapsed time = 0.005 seconds 00:06:01.511 00:06:01.511 real 0m0.142s 00:06:01.511 user 0m0.015s 00:06:01.511 sys 0m0.026s 00:06:01.511 14:24:13 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.511 14:24:13 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:01.511 ************************************ 00:06:01.511 END TEST env_mem_callbacks 00:06:01.511 ************************************ 00:06:01.511 14:24:13 env -- common/autotest_common.sh@1142 -- # return 0 00:06:01.511 00:06:01.511 real 0m1.697s 00:06:01.511 user 0m0.784s 00:06:01.511 sys 0m0.572s 00:06:01.769 14:24:13 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.769 ************************************ 00:06:01.769 14:24:13 env -- common/autotest_common.sh@10 -- # set +x 00:06:01.769 END TEST env 00:06:01.769 ************************************ 00:06:01.769 14:24:13 -- common/autotest_common.sh@1142 -- # return 0 00:06:01.769 14:24:13 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:01.769 14:24:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.769 14:24:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.769 14:24:13 -- common/autotest_common.sh@10 -- # set +x 00:06:01.769 ************************************ 00:06:01.769 START TEST rpc 00:06:01.769 ************************************ 00:06:01.769 14:24:13 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:01.769 * Looking for test storage... 00:06:01.769 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:01.769 14:24:13 rpc -- rpc/rpc.sh@65 -- # spdk_pid=74464 00:06:01.769 14:24:13 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:01.769 14:24:13 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.769 14:24:13 rpc -- rpc/rpc.sh@67 -- # waitforlisten 74464 00:06:01.769 14:24:13 rpc -- common/autotest_common.sh@829 -- # '[' -z 74464 ']' 00:06:01.769 14:24:13 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.769 14:24:13 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.769 14:24:13 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:01.769 14:24:13 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.769 14:24:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.769 [2024-07-10 14:24:13.984918] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:06:01.769 [2024-07-10 14:24:13.985643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74464 ] 00:06:02.027 [2024-07-10 14:24:14.110892] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:02.027 [2024-07-10 14:24:14.125760] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.027 [2024-07-10 14:24:14.162397] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:02.027 [2024-07-10 14:24:14.162457] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 74464' to capture a snapshot of events at runtime. 00:06:02.027 [2024-07-10 14:24:14.162470] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:02.027 [2024-07-10 14:24:14.162478] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:02.027 [2024-07-10 14:24:14.162485] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid74464 for offline analysis/debug. 00:06:02.027 [2024-07-10 14:24:14.162519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.027 14:24:14 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.027 14:24:14 rpc -- common/autotest_common.sh@862 -- # return 0 00:06:02.027 14:24:14 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:02.027 14:24:14 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:02.027 14:24:14 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:02.027 14:24:14 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:02.027 14:24:14 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.027 14:24:14 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.027 14:24:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.285 ************************************ 00:06:02.285 START TEST rpc_integrity 00:06:02.285 ************************************ 00:06:02.285 14:24:14 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:02.285 14:24:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:02.285 14:24:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.285 14:24:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.285 14:24:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.285 14:24:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:02.285 14:24:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:02.285 14:24:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:02.285 
14:24:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:02.285 14:24:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.285 14:24:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.285 14:24:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.285 14:24:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:02.285 14:24:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:02.285 14:24:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.285 14:24:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.285 14:24:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.285 14:24:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:02.285 { 00:06:02.285 "aliases": [ 00:06:02.285 "c3b8e56f-d314-41b9-ba5a-c167498cabc3" 00:06:02.285 ], 00:06:02.285 "assigned_rate_limits": { 00:06:02.285 "r_mbytes_per_sec": 0, 00:06:02.285 "rw_ios_per_sec": 0, 00:06:02.285 "rw_mbytes_per_sec": 0, 00:06:02.285 "w_mbytes_per_sec": 0 00:06:02.285 }, 00:06:02.285 "block_size": 512, 00:06:02.285 "claimed": false, 00:06:02.285 "driver_specific": {}, 00:06:02.285 "memory_domains": [ 00:06:02.285 { 00:06:02.285 "dma_device_id": "system", 00:06:02.285 "dma_device_type": 1 00:06:02.285 }, 00:06:02.285 { 00:06:02.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:02.285 "dma_device_type": 2 00:06:02.285 } 00:06:02.285 ], 00:06:02.285 "name": "Malloc0", 00:06:02.285 "num_blocks": 16384, 00:06:02.285 "product_name": "Malloc disk", 00:06:02.285 "supported_io_types": { 00:06:02.285 "abort": true, 00:06:02.285 "compare": false, 00:06:02.285 "compare_and_write": false, 00:06:02.285 "copy": true, 00:06:02.285 "flush": true, 00:06:02.285 "get_zone_info": false, 00:06:02.285 "nvme_admin": false, 00:06:02.285 "nvme_io": false, 00:06:02.285 "nvme_io_md": false, 00:06:02.285 "nvme_iov_md": false, 00:06:02.285 "read": true, 00:06:02.285 "reset": true, 00:06:02.285 "seek_data": false, 00:06:02.285 "seek_hole": false, 00:06:02.285 "unmap": true, 00:06:02.285 "write": true, 00:06:02.285 "write_zeroes": true, 00:06:02.285 "zcopy": true, 00:06:02.285 "zone_append": false, 00:06:02.285 "zone_management": false 00:06:02.285 }, 00:06:02.285 "uuid": "c3b8e56f-d314-41b9-ba5a-c167498cabc3", 00:06:02.285 "zoned": false 00:06:02.285 } 00:06:02.285 ]' 00:06:02.286 14:24:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:02.286 14:24:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:02.286 14:24:14 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:02.286 14:24:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.286 14:24:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.286 [2024-07-10 14:24:14.464038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:02.286 [2024-07-10 14:24:14.464089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:02.286 [2024-07-10 14:24:14.464109] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x105df40 00:06:02.286 [2024-07-10 14:24:14.464120] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:02.286 [2024-07-10 14:24:14.465660] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:02.286 [2024-07-10 14:24:14.465691] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:02.286 Passthru0 00:06:02.286 14:24:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.286 14:24:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:02.286 14:24:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.286 14:24:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.286 14:24:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.286 14:24:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:02.286 { 00:06:02.286 "aliases": [ 00:06:02.286 "c3b8e56f-d314-41b9-ba5a-c167498cabc3" 00:06:02.286 ], 00:06:02.286 "assigned_rate_limits": { 00:06:02.286 "r_mbytes_per_sec": 0, 00:06:02.286 "rw_ios_per_sec": 0, 00:06:02.286 "rw_mbytes_per_sec": 0, 00:06:02.286 "w_mbytes_per_sec": 0 00:06:02.286 }, 00:06:02.286 "block_size": 512, 00:06:02.286 "claim_type": "exclusive_write", 00:06:02.286 "claimed": true, 00:06:02.286 "driver_specific": {}, 00:06:02.286 "memory_domains": [ 00:06:02.286 { 00:06:02.286 "dma_device_id": "system", 00:06:02.286 "dma_device_type": 1 00:06:02.286 }, 00:06:02.286 { 00:06:02.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:02.286 "dma_device_type": 2 00:06:02.286 } 00:06:02.286 ], 00:06:02.286 "name": "Malloc0", 00:06:02.286 "num_blocks": 16384, 00:06:02.286 "product_name": "Malloc disk", 00:06:02.286 "supported_io_types": { 00:06:02.286 "abort": true, 00:06:02.286 "compare": false, 00:06:02.286 "compare_and_write": false, 00:06:02.286 "copy": true, 00:06:02.286 "flush": true, 00:06:02.286 "get_zone_info": false, 00:06:02.286 "nvme_admin": false, 00:06:02.286 "nvme_io": false, 00:06:02.286 "nvme_io_md": false, 00:06:02.286 "nvme_iov_md": false, 00:06:02.286 "read": true, 00:06:02.286 "reset": true, 00:06:02.286 "seek_data": false, 00:06:02.286 "seek_hole": false, 00:06:02.286 "unmap": true, 00:06:02.286 "write": true, 00:06:02.286 "write_zeroes": true, 00:06:02.286 "zcopy": true, 00:06:02.286 "zone_append": false, 00:06:02.286 "zone_management": false 00:06:02.286 }, 00:06:02.286 "uuid": "c3b8e56f-d314-41b9-ba5a-c167498cabc3", 00:06:02.286 "zoned": false 00:06:02.286 }, 00:06:02.286 { 00:06:02.286 "aliases": [ 00:06:02.286 "ce2defff-44e9-5e6b-b008-9b38f937411d" 00:06:02.286 ], 00:06:02.286 "assigned_rate_limits": { 00:06:02.286 "r_mbytes_per_sec": 0, 00:06:02.286 "rw_ios_per_sec": 0, 00:06:02.286 "rw_mbytes_per_sec": 0, 00:06:02.286 "w_mbytes_per_sec": 0 00:06:02.286 }, 00:06:02.286 "block_size": 512, 00:06:02.286 "claimed": false, 00:06:02.286 "driver_specific": { 00:06:02.286 "passthru": { 00:06:02.286 "base_bdev_name": "Malloc0", 00:06:02.286 "name": "Passthru0" 00:06:02.286 } 00:06:02.286 }, 00:06:02.286 "memory_domains": [ 00:06:02.286 { 00:06:02.286 "dma_device_id": "system", 00:06:02.286 "dma_device_type": 1 00:06:02.286 }, 00:06:02.286 { 00:06:02.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:02.286 "dma_device_type": 2 00:06:02.286 } 00:06:02.286 ], 00:06:02.286 "name": "Passthru0", 00:06:02.286 "num_blocks": 16384, 00:06:02.286 "product_name": "passthru", 00:06:02.286 "supported_io_types": { 00:06:02.286 "abort": true, 00:06:02.286 "compare": false, 00:06:02.286 "compare_and_write": false, 00:06:02.286 "copy": true, 00:06:02.286 "flush": true, 00:06:02.286 "get_zone_info": false, 00:06:02.286 "nvme_admin": false, 00:06:02.286 "nvme_io": false, 00:06:02.286 "nvme_io_md": false, 00:06:02.286 "nvme_iov_md": false, 00:06:02.286 "read": true, 
00:06:02.286 "reset": true, 00:06:02.286 "seek_data": false, 00:06:02.286 "seek_hole": false, 00:06:02.286 "unmap": true, 00:06:02.286 "write": true, 00:06:02.286 "write_zeroes": true, 00:06:02.286 "zcopy": true, 00:06:02.286 "zone_append": false, 00:06:02.286 "zone_management": false 00:06:02.286 }, 00:06:02.286 "uuid": "ce2defff-44e9-5e6b-b008-9b38f937411d", 00:06:02.286 "zoned": false 00:06:02.286 } 00:06:02.286 ]' 00:06:02.286 14:24:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:02.286 14:24:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:02.286 14:24:14 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:02.286 14:24:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.286 14:24:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.286 14:24:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.286 14:24:14 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:02.286 14:24:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.286 14:24:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.286 14:24:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.286 14:24:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:02.286 14:24:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.286 14:24:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.286 14:24:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.286 14:24:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:02.286 14:24:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:02.544 14:24:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:02.544 00:06:02.544 real 0m0.299s 00:06:02.544 user 0m0.192s 00:06:02.544 sys 0m0.033s 00:06:02.544 14:24:14 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.544 14:24:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.544 ************************************ 00:06:02.544 END TEST rpc_integrity 00:06:02.544 ************************************ 00:06:02.544 14:24:14 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:02.544 14:24:14 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:02.544 14:24:14 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.544 14:24:14 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.544 14:24:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.544 ************************************ 00:06:02.544 START TEST rpc_plugins 00:06:02.544 ************************************ 00:06:02.544 14:24:14 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:06:02.544 14:24:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:02.544 14:24:14 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.544 14:24:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:02.544 14:24:14 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.544 14:24:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:02.544 14:24:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:02.544 14:24:14 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.544 14:24:14 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:06:02.545 14:24:14 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.545 14:24:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:02.545 { 00:06:02.545 "aliases": [ 00:06:02.545 "f5311e78-88e8-414a-ac3b-d41bc31a7739" 00:06:02.545 ], 00:06:02.545 "assigned_rate_limits": { 00:06:02.545 "r_mbytes_per_sec": 0, 00:06:02.545 "rw_ios_per_sec": 0, 00:06:02.545 "rw_mbytes_per_sec": 0, 00:06:02.545 "w_mbytes_per_sec": 0 00:06:02.545 }, 00:06:02.545 "block_size": 4096, 00:06:02.545 "claimed": false, 00:06:02.545 "driver_specific": {}, 00:06:02.545 "memory_domains": [ 00:06:02.545 { 00:06:02.545 "dma_device_id": "system", 00:06:02.545 "dma_device_type": 1 00:06:02.545 }, 00:06:02.545 { 00:06:02.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:02.545 "dma_device_type": 2 00:06:02.545 } 00:06:02.545 ], 00:06:02.545 "name": "Malloc1", 00:06:02.545 "num_blocks": 256, 00:06:02.545 "product_name": "Malloc disk", 00:06:02.545 "supported_io_types": { 00:06:02.545 "abort": true, 00:06:02.545 "compare": false, 00:06:02.545 "compare_and_write": false, 00:06:02.545 "copy": true, 00:06:02.545 "flush": true, 00:06:02.545 "get_zone_info": false, 00:06:02.545 "nvme_admin": false, 00:06:02.545 "nvme_io": false, 00:06:02.545 "nvme_io_md": false, 00:06:02.545 "nvme_iov_md": false, 00:06:02.545 "read": true, 00:06:02.545 "reset": true, 00:06:02.545 "seek_data": false, 00:06:02.545 "seek_hole": false, 00:06:02.545 "unmap": true, 00:06:02.545 "write": true, 00:06:02.545 "write_zeroes": true, 00:06:02.545 "zcopy": true, 00:06:02.545 "zone_append": false, 00:06:02.545 "zone_management": false 00:06:02.545 }, 00:06:02.545 "uuid": "f5311e78-88e8-414a-ac3b-d41bc31a7739", 00:06:02.545 "zoned": false 00:06:02.545 } 00:06:02.545 ]' 00:06:02.545 14:24:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:02.545 14:24:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:02.545 14:24:14 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:02.545 14:24:14 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.545 14:24:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:02.545 14:24:14 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.545 14:24:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:02.545 14:24:14 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.545 14:24:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:02.545 14:24:14 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.545 14:24:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:02.545 14:24:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:02.545 14:24:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:02.545 00:06:02.545 real 0m0.158s 00:06:02.545 user 0m0.104s 00:06:02.545 sys 0m0.019s 00:06:02.545 14:24:14 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.545 14:24:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:02.545 ************************************ 00:06:02.545 END TEST rpc_plugins 00:06:02.545 ************************************ 00:06:02.803 14:24:14 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:02.803 14:24:14 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:02.803 14:24:14 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.803 14:24:14 rpc 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.803 14:24:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.803 ************************************ 00:06:02.803 START TEST rpc_trace_cmd_test 00:06:02.803 ************************************ 00:06:02.803 14:24:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:06:02.803 14:24:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:02.803 14:24:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:02.803 14:24:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.803 14:24:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:02.803 14:24:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.803 14:24:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:02.803 "bdev": { 00:06:02.803 "mask": "0x8", 00:06:02.803 "tpoint_mask": "0xffffffffffffffff" 00:06:02.803 }, 00:06:02.803 "bdev_nvme": { 00:06:02.803 "mask": "0x4000", 00:06:02.803 "tpoint_mask": "0x0" 00:06:02.803 }, 00:06:02.803 "blobfs": { 00:06:02.803 "mask": "0x80", 00:06:02.803 "tpoint_mask": "0x0" 00:06:02.803 }, 00:06:02.803 "dsa": { 00:06:02.803 "mask": "0x200", 00:06:02.803 "tpoint_mask": "0x0" 00:06:02.803 }, 00:06:02.803 "ftl": { 00:06:02.803 "mask": "0x40", 00:06:02.803 "tpoint_mask": "0x0" 00:06:02.803 }, 00:06:02.803 "iaa": { 00:06:02.803 "mask": "0x1000", 00:06:02.803 "tpoint_mask": "0x0" 00:06:02.803 }, 00:06:02.803 "iscsi_conn": { 00:06:02.803 "mask": "0x2", 00:06:02.803 "tpoint_mask": "0x0" 00:06:02.803 }, 00:06:02.803 "nvme_pcie": { 00:06:02.803 "mask": "0x800", 00:06:02.803 "tpoint_mask": "0x0" 00:06:02.803 }, 00:06:02.803 "nvme_tcp": { 00:06:02.803 "mask": "0x2000", 00:06:02.803 "tpoint_mask": "0x0" 00:06:02.803 }, 00:06:02.803 "nvmf_rdma": { 00:06:02.803 "mask": "0x10", 00:06:02.803 "tpoint_mask": "0x0" 00:06:02.803 }, 00:06:02.803 "nvmf_tcp": { 00:06:02.803 "mask": "0x20", 00:06:02.803 "tpoint_mask": "0x0" 00:06:02.803 }, 00:06:02.803 "scsi": { 00:06:02.803 "mask": "0x4", 00:06:02.803 "tpoint_mask": "0x0" 00:06:02.803 }, 00:06:02.803 "sock": { 00:06:02.803 "mask": "0x8000", 00:06:02.803 "tpoint_mask": "0x0" 00:06:02.803 }, 00:06:02.803 "thread": { 00:06:02.803 "mask": "0x400", 00:06:02.803 "tpoint_mask": "0x0" 00:06:02.803 }, 00:06:02.803 "tpoint_group_mask": "0x8", 00:06:02.803 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid74464" 00:06:02.803 }' 00:06:02.803 14:24:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:02.803 14:24:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:02.803 14:24:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:02.803 14:24:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:02.803 14:24:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:02.803 14:24:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:02.803 14:24:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:03.062 14:24:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:03.062 14:24:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:03.062 14:24:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:03.062 00:06:03.062 real 0m0.307s 00:06:03.062 user 0m0.273s 00:06:03.062 sys 0m0.023s 00:06:03.062 14:24:15 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.062 14:24:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:03.062 ************************************ 00:06:03.062 END TEST rpc_trace_cmd_test 00:06:03.062 ************************************ 00:06:03.062 14:24:15 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:03.062 14:24:15 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:06:03.062 14:24:15 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:06:03.062 14:24:15 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.062 14:24:15 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.062 14:24:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.062 ************************************ 00:06:03.062 START TEST go_rpc 00:06:03.062 ************************************ 00:06:03.062 14:24:15 rpc.go_rpc -- common/autotest_common.sh@1123 -- # go_rpc 00:06:03.062 14:24:15 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:06:03.062 14:24:15 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:06:03.062 14:24:15 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:06:03.062 14:24:15 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:06:03.062 14:24:15 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:06:03.062 14:24:15 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.062 14:24:15 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.062 14:24:15 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.062 14:24:15 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:06:03.062 14:24:15 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:06:03.062 14:24:15 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["ede97285-c6c2-45c0-91b1-a0cfc38df48c"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"ede97285-c6c2-45c0-91b1-a0cfc38df48c","zoned":false}]' 00:06:03.062 14:24:15 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:06:03.062 14:24:15 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:06:03.062 14:24:15 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:03.062 14:24:15 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.062 14:24:15 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.321 14:24:15 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.321 14:24:15 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:06:03.321 14:24:15 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:06:03.321 14:24:15 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:06:03.321 14:24:15 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:06:03.321 00:06:03.321 real 0m0.223s 00:06:03.321 user 0m0.161s 00:06:03.321 sys 0m0.031s 00:06:03.321 14:24:15 
rpc.go_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.321 ************************************ 00:06:03.321 END TEST go_rpc 00:06:03.321 14:24:15 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.321 ************************************ 00:06:03.321 14:24:15 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:03.321 14:24:15 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:03.321 14:24:15 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:03.321 14:24:15 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.321 14:24:15 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.321 14:24:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.321 ************************************ 00:06:03.321 START TEST rpc_daemon_integrity 00:06:03.321 ************************************ 00:06:03.321 14:24:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:03.321 14:24:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:03.321 14:24:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.321 14:24:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.321 14:24:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.321 14:24:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:03.321 14:24:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:03.321 14:24:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:03.321 14:24:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:03.321 14:24:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.321 14:24:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.321 14:24:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.321 14:24:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:06:03.321 14:24:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:03.321 14:24:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.321 14:24:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.321 14:24:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.321 14:24:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:03.321 { 00:06:03.321 "aliases": [ 00:06:03.321 "31c65c3c-fa64-4c1a-8c56-15f2a1f9d1c9" 00:06:03.321 ], 00:06:03.321 "assigned_rate_limits": { 00:06:03.321 "r_mbytes_per_sec": 0, 00:06:03.321 "rw_ios_per_sec": 0, 00:06:03.321 "rw_mbytes_per_sec": 0, 00:06:03.321 "w_mbytes_per_sec": 0 00:06:03.321 }, 00:06:03.321 "block_size": 512, 00:06:03.321 "claimed": false, 00:06:03.321 "driver_specific": {}, 00:06:03.321 "memory_domains": [ 00:06:03.321 { 00:06:03.321 "dma_device_id": "system", 00:06:03.321 "dma_device_type": 1 00:06:03.321 }, 00:06:03.321 { 00:06:03.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:03.321 "dma_device_type": 2 00:06:03.321 } 00:06:03.321 ], 00:06:03.321 "name": "Malloc3", 00:06:03.321 "num_blocks": 16384, 00:06:03.321 "product_name": "Malloc disk", 00:06:03.321 "supported_io_types": { 00:06:03.321 "abort": true, 00:06:03.321 "compare": false, 00:06:03.321 "compare_and_write": false, 00:06:03.321 "copy": true, 00:06:03.321 "flush": true, 00:06:03.321 "get_zone_info": false, 
00:06:03.321 "nvme_admin": false, 00:06:03.321 "nvme_io": false, 00:06:03.321 "nvme_io_md": false, 00:06:03.321 "nvme_iov_md": false, 00:06:03.321 "read": true, 00:06:03.321 "reset": true, 00:06:03.321 "seek_data": false, 00:06:03.321 "seek_hole": false, 00:06:03.321 "unmap": true, 00:06:03.321 "write": true, 00:06:03.321 "write_zeroes": true, 00:06:03.321 "zcopy": true, 00:06:03.321 "zone_append": false, 00:06:03.321 "zone_management": false 00:06:03.321 }, 00:06:03.321 "uuid": "31c65c3c-fa64-4c1a-8c56-15f2a1f9d1c9", 00:06:03.321 "zoned": false 00:06:03.321 } 00:06:03.321 ]' 00:06:03.321 14:24:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:03.580 14:24:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:03.580 14:24:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:06:03.580 14:24:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.580 14:24:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.580 [2024-07-10 14:24:15.625233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:03.580 [2024-07-10 14:24:15.625338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:03.580 [2024-07-10 14:24:15.625375] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1051370 00:06:03.580 [2024-07-10 14:24:15.625399] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:03.580 [2024-07-10 14:24:15.627244] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:03.580 [2024-07-10 14:24:15.627322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:03.580 Passthru0 00:06:03.580 14:24:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.580 14:24:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:03.580 14:24:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.580 14:24:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.580 14:24:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.580 14:24:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:03.580 { 00:06:03.580 "aliases": [ 00:06:03.580 "31c65c3c-fa64-4c1a-8c56-15f2a1f9d1c9" 00:06:03.580 ], 00:06:03.580 "assigned_rate_limits": { 00:06:03.580 "r_mbytes_per_sec": 0, 00:06:03.580 "rw_ios_per_sec": 0, 00:06:03.580 "rw_mbytes_per_sec": 0, 00:06:03.580 "w_mbytes_per_sec": 0 00:06:03.580 }, 00:06:03.580 "block_size": 512, 00:06:03.580 "claim_type": "exclusive_write", 00:06:03.580 "claimed": true, 00:06:03.580 "driver_specific": {}, 00:06:03.580 "memory_domains": [ 00:06:03.580 { 00:06:03.580 "dma_device_id": "system", 00:06:03.580 "dma_device_type": 1 00:06:03.580 }, 00:06:03.580 { 00:06:03.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:03.580 "dma_device_type": 2 00:06:03.580 } 00:06:03.580 ], 00:06:03.580 "name": "Malloc3", 00:06:03.580 "num_blocks": 16384, 00:06:03.580 "product_name": "Malloc disk", 00:06:03.580 "supported_io_types": { 00:06:03.580 "abort": true, 00:06:03.580 "compare": false, 00:06:03.580 "compare_and_write": false, 00:06:03.580 "copy": true, 00:06:03.580 "flush": true, 00:06:03.580 "get_zone_info": false, 00:06:03.580 "nvme_admin": false, 00:06:03.580 "nvme_io": false, 00:06:03.580 "nvme_io_md": false, 00:06:03.580 "nvme_iov_md": 
false, 00:06:03.580 "read": true, 00:06:03.580 "reset": true, 00:06:03.580 "seek_data": false, 00:06:03.580 "seek_hole": false, 00:06:03.580 "unmap": true, 00:06:03.580 "write": true, 00:06:03.580 "write_zeroes": true, 00:06:03.580 "zcopy": true, 00:06:03.580 "zone_append": false, 00:06:03.580 "zone_management": false 00:06:03.580 }, 00:06:03.580 "uuid": "31c65c3c-fa64-4c1a-8c56-15f2a1f9d1c9", 00:06:03.580 "zoned": false 00:06:03.580 }, 00:06:03.580 { 00:06:03.580 "aliases": [ 00:06:03.580 "69803c43-9a99-5eec-95de-6eb90672fa8b" 00:06:03.580 ], 00:06:03.580 "assigned_rate_limits": { 00:06:03.580 "r_mbytes_per_sec": 0, 00:06:03.580 "rw_ios_per_sec": 0, 00:06:03.580 "rw_mbytes_per_sec": 0, 00:06:03.580 "w_mbytes_per_sec": 0 00:06:03.580 }, 00:06:03.580 "block_size": 512, 00:06:03.580 "claimed": false, 00:06:03.580 "driver_specific": { 00:06:03.580 "passthru": { 00:06:03.580 "base_bdev_name": "Malloc3", 00:06:03.580 "name": "Passthru0" 00:06:03.580 } 00:06:03.580 }, 00:06:03.580 "memory_domains": [ 00:06:03.580 { 00:06:03.580 "dma_device_id": "system", 00:06:03.580 "dma_device_type": 1 00:06:03.580 }, 00:06:03.580 { 00:06:03.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:03.580 "dma_device_type": 2 00:06:03.580 } 00:06:03.580 ], 00:06:03.580 "name": "Passthru0", 00:06:03.580 "num_blocks": 16384, 00:06:03.580 "product_name": "passthru", 00:06:03.580 "supported_io_types": { 00:06:03.580 "abort": true, 00:06:03.580 "compare": false, 00:06:03.580 "compare_and_write": false, 00:06:03.580 "copy": true, 00:06:03.580 "flush": true, 00:06:03.580 "get_zone_info": false, 00:06:03.580 "nvme_admin": false, 00:06:03.580 "nvme_io": false, 00:06:03.580 "nvme_io_md": false, 00:06:03.580 "nvme_iov_md": false, 00:06:03.580 "read": true, 00:06:03.580 "reset": true, 00:06:03.580 "seek_data": false, 00:06:03.580 "seek_hole": false, 00:06:03.580 "unmap": true, 00:06:03.580 "write": true, 00:06:03.580 "write_zeroes": true, 00:06:03.580 "zcopy": true, 00:06:03.580 "zone_append": false, 00:06:03.580 "zone_management": false 00:06:03.580 }, 00:06:03.580 "uuid": "69803c43-9a99-5eec-95de-6eb90672fa8b", 00:06:03.580 "zoned": false 00:06:03.580 } 00:06:03.580 ]' 00:06:03.580 14:24:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:03.580 14:24:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:03.580 14:24:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:03.580 14:24:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.580 14:24:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.580 14:24:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.580 14:24:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:06:03.580 14:24:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.580 14:24:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.580 14:24:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.580 14:24:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:03.580 14:24:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.580 14:24:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.580 14:24:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.580 14:24:15 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:06:03.580 14:24:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:03.580 14:24:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:03.580 00:06:03.580 real 0m0.335s 00:06:03.580 user 0m0.234s 00:06:03.580 sys 0m0.038s 00:06:03.580 14:24:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.580 14:24:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.580 ************************************ 00:06:03.580 END TEST rpc_daemon_integrity 00:06:03.580 ************************************ 00:06:03.580 14:24:15 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:03.580 14:24:15 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:03.580 14:24:15 rpc -- rpc/rpc.sh@84 -- # killprocess 74464 00:06:03.580 14:24:15 rpc -- common/autotest_common.sh@948 -- # '[' -z 74464 ']' 00:06:03.580 14:24:15 rpc -- common/autotest_common.sh@952 -- # kill -0 74464 00:06:03.580 14:24:15 rpc -- common/autotest_common.sh@953 -- # uname 00:06:03.580 14:24:15 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:03.580 14:24:15 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74464 00:06:03.838 14:24:15 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:03.838 killing process with pid 74464 00:06:03.838 14:24:15 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:03.838 14:24:15 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74464' 00:06:03.838 14:24:15 rpc -- common/autotest_common.sh@967 -- # kill 74464 00:06:03.838 14:24:15 rpc -- common/autotest_common.sh@972 -- # wait 74464 00:06:03.838 00:06:03.838 real 0m2.266s 00:06:03.838 user 0m3.202s 00:06:03.838 sys 0m0.543s 00:06:03.838 14:24:16 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.838 14:24:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.838 ************************************ 00:06:03.838 END TEST rpc 00:06:03.838 ************************************ 00:06:04.096 14:24:16 -- common/autotest_common.sh@1142 -- # return 0 00:06:04.096 14:24:16 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:04.096 14:24:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.096 14:24:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.096 14:24:16 -- common/autotest_common.sh@10 -- # set +x 00:06:04.096 ************************************ 00:06:04.096 START TEST skip_rpc 00:06:04.096 ************************************ 00:06:04.096 14:24:16 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:04.096 * Looking for test storage... 
00:06:04.096 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:04.096 14:24:16 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:04.096 14:24:16 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:04.096 14:24:16 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:04.096 14:24:16 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.096 14:24:16 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.096 14:24:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.096 ************************************ 00:06:04.096 START TEST skip_rpc 00:06:04.096 ************************************ 00:06:04.096 14:24:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:06:04.096 14:24:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=74706 00:06:04.096 14:24:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.096 14:24:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:04.096 14:24:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:04.096 [2024-07-10 14:24:16.286796] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:06:04.096 [2024-07-10 14:24:16.286921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74706 ] 00:06:04.354 [2024-07-10 14:24:16.407406] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:04.354 [2024-07-10 14:24:16.424940] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.354 [2024-07-10 14:24:16.461248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.626 14:24:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:09.626 14:24:21 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:09.626 14:24:21 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:09.626 14:24:21 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:09.626 14:24:21 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.626 14:24:21 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:09.626 14:24:21 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.626 14:24:21 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:09.626 14:24:21 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.626 14:24:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.626 2024/07/10 14:24:21 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:06:09.626 14:24:21 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:09.626 14:24:21 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:09.626 14:24:21 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:09.626 14:24:21 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:09.626 14:24:21 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:09.626 14:24:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:09.626 14:24:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 74706 00:06:09.626 14:24:21 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 74706 ']' 00:06:09.626 14:24:21 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 74706 00:06:09.626 14:24:21 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:06:09.626 14:24:21 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:09.626 14:24:21 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74706 00:06:09.626 14:24:21 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:09.626 killing process with pid 74706 00:06:09.626 14:24:21 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:09.626 14:24:21 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74706' 00:06:09.626 14:24:21 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 74706 00:06:09.626 14:24:21 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 74706 00:06:09.626 00:06:09.626 real 0m5.268s 00:06:09.626 user 0m5.005s 00:06:09.627 sys 0m0.169s 00:06:09.627 14:24:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.627 14:24:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.627 ************************************ 00:06:09.627 END TEST skip_rpc 00:06:09.627 ************************************ 00:06:09.627 14:24:21 skip_rpc -- 
common/autotest_common.sh@1142 -- # return 0 00:06:09.627 14:24:21 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:09.627 14:24:21 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.627 14:24:21 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.627 14:24:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.627 ************************************ 00:06:09.627 START TEST skip_rpc_with_json 00:06:09.627 ************************************ 00:06:09.627 14:24:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:06:09.627 14:24:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:09.627 14:24:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=74801 00:06:09.627 14:24:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.627 14:24:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.627 14:24:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 74801 00:06:09.627 14:24:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 74801 ']' 00:06:09.627 14:24:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.627 14:24:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.627 14:24:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.627 14:24:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.627 14:24:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:09.627 [2024-07-10 14:24:21.582928] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:06:09.627 [2024-07-10 14:24:21.583033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74801 ] 00:06:09.627 [2024-07-10 14:24:21.700637] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:09.627 [2024-07-10 14:24:21.720808] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.627 [2024-07-10 14:24:21.757129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.627 14:24:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.627 14:24:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:06:09.627 14:24:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:09.627 14:24:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.627 14:24:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:09.888 [2024-07-10 14:24:21.915122] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:09.888 2024/07/10 14:24:21 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:06:09.888 request: 00:06:09.888 { 00:06:09.888 "method": "nvmf_get_transports", 00:06:09.888 "params": { 00:06:09.888 "trtype": "tcp" 00:06:09.888 } 00:06:09.888 } 00:06:09.888 Got JSON-RPC error response 00:06:09.888 GoRPCClient: error on JSON-RPC call 00:06:09.888 14:24:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:09.888 14:24:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:09.888 14:24:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.888 14:24:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:09.888 [2024-07-10 14:24:21.927254] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:09.888 14:24:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.889 14:24:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:09.889 14:24:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.889 14:24:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:09.889 14:24:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.889 14:24:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:09.889 { 00:06:09.889 "subsystems": [ 00:06:09.889 { 00:06:09.889 "subsystem": "keyring", 00:06:09.889 "config": [] 00:06:09.889 }, 00:06:09.889 { 00:06:09.889 "subsystem": "iobuf", 00:06:09.889 "config": [ 00:06:09.889 { 00:06:09.889 "method": "iobuf_set_options", 00:06:09.889 "params": { 00:06:09.889 "large_bufsize": 135168, 00:06:09.889 "large_pool_count": 1024, 00:06:09.889 "small_bufsize": 8192, 00:06:09.889 "small_pool_count": 8192 00:06:09.889 } 00:06:09.889 } 00:06:09.889 ] 00:06:09.889 }, 00:06:09.889 { 00:06:09.889 "subsystem": "sock", 00:06:09.889 "config": [ 00:06:09.889 { 00:06:09.889 "method": "sock_set_default_impl", 00:06:09.889 "params": { 00:06:09.889 "impl_name": "posix" 00:06:09.889 } 00:06:09.889 }, 00:06:09.889 { 00:06:09.889 "method": "sock_impl_set_options", 00:06:09.889 "params": { 00:06:09.889 "enable_ktls": false, 00:06:09.889 "enable_placement_id": 0, 00:06:09.889 "enable_quickack": false, 00:06:09.889 "enable_recv_pipe": true, 00:06:09.889 "enable_zerocopy_send_client": false, 00:06:09.889 "enable_zerocopy_send_server": true, 00:06:09.889 "impl_name": "ssl", 
00:06:09.889 "recv_buf_size": 4096, 00:06:09.889 "send_buf_size": 4096, 00:06:09.889 "tls_version": 0, 00:06:09.889 "zerocopy_threshold": 0 00:06:09.889 } 00:06:09.889 }, 00:06:09.889 { 00:06:09.889 "method": "sock_impl_set_options", 00:06:09.889 "params": { 00:06:09.889 "enable_ktls": false, 00:06:09.889 "enable_placement_id": 0, 00:06:09.889 "enable_quickack": false, 00:06:09.889 "enable_recv_pipe": true, 00:06:09.889 "enable_zerocopy_send_client": false, 00:06:09.889 "enable_zerocopy_send_server": true, 00:06:09.889 "impl_name": "posix", 00:06:09.889 "recv_buf_size": 2097152, 00:06:09.889 "send_buf_size": 2097152, 00:06:09.889 "tls_version": 0, 00:06:09.889 "zerocopy_threshold": 0 00:06:09.889 } 00:06:09.889 } 00:06:09.889 ] 00:06:09.889 }, 00:06:09.889 { 00:06:09.889 "subsystem": "vmd", 00:06:09.889 "config": [] 00:06:09.889 }, 00:06:09.889 { 00:06:09.889 "subsystem": "accel", 00:06:09.889 "config": [ 00:06:09.889 { 00:06:09.889 "method": "accel_set_options", 00:06:09.889 "params": { 00:06:09.889 "buf_count": 2048, 00:06:09.889 "large_cache_size": 16, 00:06:09.889 "sequence_count": 2048, 00:06:09.889 "small_cache_size": 128, 00:06:09.889 "task_count": 2048 00:06:09.889 } 00:06:09.889 } 00:06:09.889 ] 00:06:09.889 }, 00:06:09.889 { 00:06:09.889 "subsystem": "bdev", 00:06:09.889 "config": [ 00:06:09.889 { 00:06:09.889 "method": "bdev_set_options", 00:06:09.889 "params": { 00:06:09.889 "bdev_auto_examine": true, 00:06:09.889 "bdev_io_cache_size": 256, 00:06:09.889 "bdev_io_pool_size": 65535, 00:06:09.889 "iobuf_large_cache_size": 16, 00:06:09.889 "iobuf_small_cache_size": 128 00:06:09.889 } 00:06:09.889 }, 00:06:09.889 { 00:06:09.889 "method": "bdev_raid_set_options", 00:06:09.889 "params": { 00:06:09.889 "process_window_size_kb": 1024 00:06:09.889 } 00:06:09.889 }, 00:06:09.889 { 00:06:09.889 "method": "bdev_iscsi_set_options", 00:06:09.889 "params": { 00:06:09.889 "timeout_sec": 30 00:06:09.889 } 00:06:09.889 }, 00:06:09.889 { 00:06:09.889 "method": "bdev_nvme_set_options", 00:06:09.889 "params": { 00:06:09.889 "action_on_timeout": "none", 00:06:09.889 "allow_accel_sequence": false, 00:06:09.889 "arbitration_burst": 0, 00:06:09.889 "bdev_retry_count": 3, 00:06:09.889 "ctrlr_loss_timeout_sec": 0, 00:06:09.889 "delay_cmd_submit": true, 00:06:09.889 "dhchap_dhgroups": [ 00:06:09.889 "null", 00:06:09.889 "ffdhe2048", 00:06:09.889 "ffdhe3072", 00:06:09.889 "ffdhe4096", 00:06:09.889 "ffdhe6144", 00:06:09.889 "ffdhe8192" 00:06:09.889 ], 00:06:09.889 "dhchap_digests": [ 00:06:09.889 "sha256", 00:06:09.889 "sha384", 00:06:09.889 "sha512" 00:06:09.889 ], 00:06:09.889 "disable_auto_failback": false, 00:06:09.889 "fast_io_fail_timeout_sec": 0, 00:06:09.889 "generate_uuids": false, 00:06:09.889 "high_priority_weight": 0, 00:06:09.889 "io_path_stat": false, 00:06:09.889 "io_queue_requests": 0, 00:06:09.889 "keep_alive_timeout_ms": 10000, 00:06:09.889 "low_priority_weight": 0, 00:06:09.889 "medium_priority_weight": 0, 00:06:09.889 "nvme_adminq_poll_period_us": 10000, 00:06:09.889 "nvme_error_stat": false, 00:06:09.889 "nvme_ioq_poll_period_us": 0, 00:06:09.889 "rdma_cm_event_timeout_ms": 0, 00:06:09.889 "rdma_max_cq_size": 0, 00:06:09.889 "rdma_srq_size": 0, 00:06:09.889 "reconnect_delay_sec": 0, 00:06:09.889 "timeout_admin_us": 0, 00:06:09.889 "timeout_us": 0, 00:06:09.889 "transport_ack_timeout": 0, 00:06:09.889 "transport_retry_count": 4, 00:06:09.889 "transport_tos": 0 00:06:09.889 } 00:06:09.889 }, 00:06:09.889 { 00:06:09.889 "method": "bdev_nvme_set_hotplug", 00:06:09.889 "params": { 
00:06:09.889 "enable": false, 00:06:09.889 "period_us": 100000 00:06:09.889 } 00:06:09.889 }, 00:06:09.889 { 00:06:09.889 "method": "bdev_wait_for_examine" 00:06:09.889 } 00:06:09.889 ] 00:06:09.889 }, 00:06:09.889 { 00:06:09.889 "subsystem": "scsi", 00:06:09.889 "config": null 00:06:09.889 }, 00:06:09.889 { 00:06:09.889 "subsystem": "scheduler", 00:06:09.889 "config": [ 00:06:09.889 { 00:06:09.889 "method": "framework_set_scheduler", 00:06:09.889 "params": { 00:06:09.889 "name": "static" 00:06:09.889 } 00:06:09.889 } 00:06:09.889 ] 00:06:09.889 }, 00:06:09.889 { 00:06:09.889 "subsystem": "vhost_scsi", 00:06:09.889 "config": [] 00:06:09.889 }, 00:06:09.889 { 00:06:09.889 "subsystem": "vhost_blk", 00:06:09.889 "config": [] 00:06:09.889 }, 00:06:09.889 { 00:06:09.889 "subsystem": "ublk", 00:06:09.889 "config": [] 00:06:09.889 }, 00:06:09.889 { 00:06:09.889 "subsystem": "nbd", 00:06:09.889 "config": [] 00:06:09.889 }, 00:06:09.889 { 00:06:09.889 "subsystem": "nvmf", 00:06:09.889 "config": [ 00:06:09.889 { 00:06:09.889 "method": "nvmf_set_config", 00:06:09.889 "params": { 00:06:09.889 "admin_cmd_passthru": { 00:06:09.889 "identify_ctrlr": false 00:06:09.889 }, 00:06:09.889 "discovery_filter": "match_any" 00:06:09.889 } 00:06:09.889 }, 00:06:09.889 { 00:06:09.889 "method": "nvmf_set_max_subsystems", 00:06:09.889 "params": { 00:06:09.889 "max_subsystems": 1024 00:06:09.889 } 00:06:09.889 }, 00:06:09.889 { 00:06:09.889 "method": "nvmf_set_crdt", 00:06:09.889 "params": { 00:06:09.889 "crdt1": 0, 00:06:09.889 "crdt2": 0, 00:06:09.889 "crdt3": 0 00:06:09.889 } 00:06:09.889 }, 00:06:09.889 { 00:06:09.889 "method": "nvmf_create_transport", 00:06:09.889 "params": { 00:06:09.889 "abort_timeout_sec": 1, 00:06:09.889 "ack_timeout": 0, 00:06:09.889 "buf_cache_size": 4294967295, 00:06:09.889 "c2h_success": true, 00:06:09.889 "data_wr_pool_size": 0, 00:06:09.889 "dif_insert_or_strip": false, 00:06:09.889 "in_capsule_data_size": 4096, 00:06:09.889 "io_unit_size": 131072, 00:06:09.889 "max_aq_depth": 128, 00:06:09.889 "max_io_qpairs_per_ctrlr": 127, 00:06:09.889 "max_io_size": 131072, 00:06:09.889 "max_queue_depth": 128, 00:06:09.889 "num_shared_buffers": 511, 00:06:09.889 "sock_priority": 0, 00:06:09.889 "trtype": "TCP", 00:06:09.889 "zcopy": false 00:06:09.889 } 00:06:09.889 } 00:06:09.889 ] 00:06:09.889 }, 00:06:09.889 { 00:06:09.889 "subsystem": "iscsi", 00:06:09.889 "config": [ 00:06:09.889 { 00:06:09.889 "method": "iscsi_set_options", 00:06:09.889 "params": { 00:06:09.889 "allow_duplicated_isid": false, 00:06:09.889 "chap_group": 0, 00:06:09.889 "data_out_pool_size": 2048, 00:06:09.889 "default_time2retain": 20, 00:06:09.889 "default_time2wait": 2, 00:06:09.889 "disable_chap": false, 00:06:09.889 "error_recovery_level": 0, 00:06:09.889 "first_burst_length": 8192, 00:06:09.889 "immediate_data": true, 00:06:09.889 "immediate_data_pool_size": 16384, 00:06:09.889 "max_connections_per_session": 2, 00:06:09.889 "max_large_datain_per_connection": 64, 00:06:09.889 "max_queue_depth": 64, 00:06:09.889 "max_r2t_per_connection": 4, 00:06:09.889 "max_sessions": 128, 00:06:09.889 "mutual_chap": false, 00:06:09.889 "node_base": "iqn.2016-06.io.spdk", 00:06:09.889 "nop_in_interval": 30, 00:06:09.889 "nop_timeout": 60, 00:06:09.889 "pdu_pool_size": 36864, 00:06:09.889 "require_chap": false 00:06:09.889 } 00:06:09.889 } 00:06:09.889 ] 00:06:09.889 } 00:06:09.889 ] 00:06:09.889 } 00:06:09.889 14:24:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:09.889 14:24:22 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 74801 00:06:09.889 14:24:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 74801 ']' 00:06:09.889 14:24:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 74801 00:06:09.889 14:24:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:09.889 14:24:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:09.889 14:24:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74801 00:06:09.889 14:24:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:09.889 killing process with pid 74801 00:06:09.889 14:24:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:09.890 14:24:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74801' 00:06:09.890 14:24:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 74801 00:06:09.890 14:24:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 74801 00:06:10.167 14:24:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:10.167 14:24:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=74821 00:06:10.167 14:24:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:15.433 14:24:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 74821 00:06:15.433 14:24:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 74821 ']' 00:06:15.433 14:24:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 74821 00:06:15.433 14:24:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:15.433 14:24:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.433 14:24:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74821 00:06:15.433 14:24:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:15.433 killing process with pid 74821 00:06:15.433 14:24:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:15.433 14:24:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74821' 00:06:15.433 14:24:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 74821 00:06:15.433 14:24:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 74821 00:06:15.433 14:24:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:15.433 14:24:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:15.433 00:06:15.433 real 0m6.095s 00:06:15.433 user 0m5.825s 00:06:15.433 sys 0m0.413s 00:06:15.433 14:24:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.433 14:24:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:15.433 ************************************ 00:06:15.433 END TEST skip_rpc_with_json 00:06:15.433 ************************************ 00:06:15.433 14:24:27 skip_rpc -- 
common/autotest_common.sh@1142 -- # return 0 00:06:15.433 14:24:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:15.433 14:24:27 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.433 14:24:27 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.433 14:24:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.433 ************************************ 00:06:15.433 START TEST skip_rpc_with_delay 00:06:15.433 ************************************ 00:06:15.433 14:24:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:15.433 14:24:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:15.433 14:24:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:15.433 14:24:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:15.433 14:24:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.433 14:24:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.433 14:24:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.433 14:24:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.433 14:24:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.433 14:24:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.433 14:24:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.433 14:24:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:15.433 14:24:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:15.704 [2024-07-10 14:24:27.723668] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:15.704 [2024-07-10 14:24:27.723799] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:15.704 14:24:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:15.704 14:24:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:15.704 14:24:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:15.704 14:24:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:15.704 00:06:15.704 real 0m0.077s 00:06:15.704 user 0m0.054s 00:06:15.704 sys 0m0.022s 00:06:15.704 14:24:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.704 ************************************ 00:06:15.704 END TEST skip_rpc_with_delay 00:06:15.704 ************************************ 00:06:15.704 14:24:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:15.704 14:24:27 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:15.704 14:24:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:15.704 14:24:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:15.704 14:24:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:15.704 14:24:27 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.704 14:24:27 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.704 14:24:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.704 ************************************ 00:06:15.704 START TEST exit_on_failed_rpc_init 00:06:15.704 ************************************ 00:06:15.704 14:24:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:15.704 14:24:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=74938 00:06:15.704 14:24:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.704 14:24:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 74938 00:06:15.704 14:24:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 74938 ']' 00:06:15.704 14:24:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.704 14:24:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.704 14:24:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.704 14:24:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.705 14:24:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:15.705 [2024-07-10 14:24:27.840773] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 
00:06:15.705 [2024-07-10 14:24:27.840869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74938 ] 00:06:15.705 [2024-07-10 14:24:27.959029] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:15.705 [2024-07-10 14:24:27.974890] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.962 [2024-07-10 14:24:28.011302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.962 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.962 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:15.962 14:24:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:15.962 14:24:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:15.962 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:15.962 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:15.962 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.962 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.962 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.962 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.962 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.962 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.962 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.962 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:15.962 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:15.962 [2024-07-10 14:24:28.217220] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:06:15.962 [2024-07-10 14:24:28.217341] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74949 ] 00:06:16.221 [2024-07-10 14:24:28.334944] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:16.221 [2024-07-10 14:24:28.352459] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.221 [2024-07-10 14:24:28.389412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.221 [2024-07-10 14:24:28.389509] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:16.221 [2024-07-10 14:24:28.389525] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:16.221 [2024-07-10 14:24:28.389533] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:16.221 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:16.221 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:16.221 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:16.221 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:16.221 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:16.221 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:16.221 14:24:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:16.221 14:24:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 74938 00:06:16.221 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 74938 ']' 00:06:16.221 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 74938 00:06:16.221 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:16.221 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:16.221 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74938 00:06:16.221 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:16.221 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:16.221 killing process with pid 74938 00:06:16.221 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74938' 00:06:16.221 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 74938 00:06:16.221 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 74938 00:06:16.479 00:06:16.479 real 0m0.917s 00:06:16.479 user 0m1.041s 00:06:16.479 sys 0m0.245s 00:06:16.479 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.479 14:24:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:16.479 ************************************ 00:06:16.479 END TEST exit_on_failed_rpc_init 00:06:16.479 ************************************ 00:06:16.479 14:24:28 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:16.479 14:24:28 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:16.479 00:06:16.479 real 0m12.605s 00:06:16.479 user 0m12.009s 00:06:16.479 sys 0m1.002s 00:06:16.479 14:24:28 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.479 14:24:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.479 ************************************ 
00:06:16.479 END TEST skip_rpc 00:06:16.479 ************************************ 00:06:16.738 14:24:28 -- common/autotest_common.sh@1142 -- # return 0 00:06:16.738 14:24:28 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:16.738 14:24:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.738 14:24:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.738 14:24:28 -- common/autotest_common.sh@10 -- # set +x 00:06:16.738 ************************************ 00:06:16.738 START TEST rpc_client 00:06:16.738 ************************************ 00:06:16.738 14:24:28 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:16.738 * Looking for test storage... 00:06:16.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:16.738 14:24:28 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:16.738 OK 00:06:16.738 14:24:28 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:16.738 00:06:16.738 real 0m0.098s 00:06:16.738 user 0m0.045s 00:06:16.738 sys 0m0.057s 00:06:16.738 14:24:28 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.738 14:24:28 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:16.738 ************************************ 00:06:16.738 END TEST rpc_client 00:06:16.738 ************************************ 00:06:16.739 14:24:28 -- common/autotest_common.sh@1142 -- # return 0 00:06:16.739 14:24:28 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:16.739 14:24:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.739 14:24:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.739 14:24:28 -- common/autotest_common.sh@10 -- # set +x 00:06:16.739 ************************************ 00:06:16.739 START TEST json_config 00:06:16.739 ************************************ 00:06:16.739 14:24:28 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:16.739 14:24:28 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:16.739 14:24:28 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:16.739 14:24:28 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.739 14:24:28 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.739 14:24:28 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.739 14:24:28 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.739 14:24:28 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.739 14:24:28 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.739 14:24:28 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.739 14:24:28 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.739 14:24:28 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.739 14:24:28 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.739 14:24:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:06:16.739 14:24:29 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:06:16.739 14:24:29 json_config -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.739 14:24:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.739 14:24:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:16.739 14:24:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.739 14:24:29 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:16.739 14:24:29 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.739 14:24:29 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.739 14:24:29 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.739 14:24:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.739 14:24:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.739 14:24:29 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.739 14:24:29 json_config -- paths/export.sh@5 -- # export PATH 00:06:16.739 14:24:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.739 14:24:29 json_config -- nvmf/common.sh@47 -- # : 0 00:06:16.739 14:24:29 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:16.739 14:24:29 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:16.739 14:24:29 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.739 14:24:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.739 14:24:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.739 14:24:29 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:16.739 14:24:29 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:16.739 14:24:29 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:16.739 14:24:29 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:16.739 14:24:29 json_config -- 
json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:16.739 14:24:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:16.739 14:24:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:16.739 14:24:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:16.739 14:24:29 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:16.739 14:24:29 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:16.739 14:24:29 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:16.739 14:24:29 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:16.739 14:24:29 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:16.739 14:24:29 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:16.739 14:24:29 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:16.739 14:24:29 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:16.739 14:24:29 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:16.739 14:24:29 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:16.739 INFO: JSON configuration test init 00:06:16.739 14:24:29 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:16.739 14:24:29 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:16.739 14:24:29 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:16.739 14:24:29 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:16.739 14:24:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.997 14:24:29 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:16.997 14:24:29 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:16.997 14:24:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.997 14:24:29 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:16.997 14:24:29 json_config -- json_config/common.sh@9 -- # local app=target 00:06:16.997 14:24:29 json_config -- json_config/common.sh@10 -- # shift 00:06:16.997 14:24:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:16.997 14:24:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:16.997 14:24:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:16.997 14:24:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:16.997 14:24:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:16.997 14:24:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=75067 00:06:16.997 14:24:29 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:16.997 Waiting for target to run... 00:06:16.997 14:24:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
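The target above is launched with --wait-for-rpc, so subsystem initialization is held back until a configuration arrives over the RPC socket. A minimal standalone sketch of that startup path, assuming the same repo layout and socket path as the trace (the pipe from gen_nvme.sh into load_config is inferred from the adjacent xtrace lines, not shown verbatim):
# Illustrative sketch, not captured test output.
SPDK=/home/vagrant/spdk_repo/spdk
$SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
# The harness polls the socket (waitforlisten) before issuing RPCs; a short
# sleep stands in for that here.
sleep 1
# Feed a generated NVMe configuration to the waiting target (pipe assumed).
$SPDK/scripts/gen_nvme.sh --json-with-subsystems \
    | $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config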
00:06:16.997 14:24:29 json_config -- json_config/common.sh@25 -- # waitforlisten 75067 /var/tmp/spdk_tgt.sock 00:06:16.997 14:24:29 json_config -- common/autotest_common.sh@829 -- # '[' -z 75067 ']' 00:06:16.997 14:24:29 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:16.997 14:24:29 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:16.997 14:24:29 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:16.997 14:24:29 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.997 14:24:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.997 [2024-07-10 14:24:29.091080] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:06:16.997 [2024-07-10 14:24:29.091213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75067 ] 00:06:17.255 [2024-07-10 14:24:29.372549] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:17.255 [2024-07-10 14:24:29.391046] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.255 [2024-07-10 14:24:29.418669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.822 14:24:30 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.822 14:24:30 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:17.822 00:06:17.822 14:24:30 json_config -- json_config/common.sh@26 -- # echo '' 00:06:17.822 14:24:30 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:06:17.822 14:24:30 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:17.822 14:24:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:17.822 14:24:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.822 14:24:30 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:17.822 14:24:30 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:17.822 14:24:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:17.822 14:24:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.822 14:24:30 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:17.822 14:24:30 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:17.822 14:24:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:18.389 14:24:30 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:18.389 14:24:30 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:18.389 14:24:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:18.389 14:24:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.389 14:24:30 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:18.389 14:24:30 json_config -- 
json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:18.389 14:24:30 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:18.389 14:24:30 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:18.389 14:24:30 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:18.389 14:24:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:18.648 14:24:30 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:18.648 14:24:30 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:18.648 14:24:30 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:18.648 14:24:30 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:18.648 14:24:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:18.648 14:24:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.648 14:24:30 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:18.648 14:24:30 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:18.648 14:24:30 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:18.648 14:24:30 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:18.648 14:24:30 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:18.648 14:24:30 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:18.648 14:24:30 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:18.648 14:24:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:18.648 14:24:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.648 14:24:30 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:18.648 14:24:30 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:18.648 14:24:30 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:18.648 14:24:30 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:18.648 14:24:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:19.216 MallocForNvmf0 00:06:19.216 14:24:31 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:19.216 14:24:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:19.474 MallocForNvmf1 00:06:19.474 14:24:31 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:19.474 14:24:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:19.474 [2024-07-10 14:24:31.761758] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:19.733 14:24:31 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:19.733 14:24:31 json_config -- 
json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:19.991 14:24:32 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:19.991 14:24:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:19.992 14:24:32 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:19.992 14:24:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:20.560 14:24:32 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:20.560 14:24:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:20.560 [2024-07-10 14:24:32.822396] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:20.560 14:24:32 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:20.560 14:24:32 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:20.560 14:24:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.821 14:24:32 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:20.821 14:24:32 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:20.821 14:24:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.821 14:24:32 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:20.821 14:24:32 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:20.821 14:24:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:21.078 MallocBdevForConfigChangeCheck 00:06:21.078 14:24:33 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:21.078 14:24:33 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:21.078 14:24:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.078 14:24:33 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:21.078 14:24:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:21.646 INFO: shutting down applications... 00:06:21.646 14:24:33 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
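The nvmf target configuration above is built entirely over JSON-RPC and then captured with save_config. As a minimal standalone sketch of that sequence, assuming a running spdk_tgt listening on /var/tmp/spdk_tgt.sock and using only RPC calls that appear in the trace:
# Illustrative sketch, not captured test output.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
# Persist the resulting configuration for the relaunch/compare steps below.
$RPC save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json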
00:06:21.646 14:24:33 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:21.646 14:24:33 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:21.646 14:24:33 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:21.646 14:24:33 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:21.905 Calling clear_iscsi_subsystem 00:06:21.905 Calling clear_nvmf_subsystem 00:06:21.905 Calling clear_nbd_subsystem 00:06:21.905 Calling clear_ublk_subsystem 00:06:21.905 Calling clear_vhost_blk_subsystem 00:06:21.905 Calling clear_vhost_scsi_subsystem 00:06:21.905 Calling clear_bdev_subsystem 00:06:21.905 14:24:33 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:21.905 14:24:33 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:21.905 14:24:33 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:21.905 14:24:33 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:21.905 14:24:33 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:21.905 14:24:33 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:22.164 14:24:34 json_config -- json_config/json_config.sh@345 -- # break 00:06:22.164 14:24:34 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:22.164 14:24:34 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:22.164 14:24:34 json_config -- json_config/common.sh@31 -- # local app=target 00:06:22.164 14:24:34 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:22.164 14:24:34 json_config -- json_config/common.sh@35 -- # [[ -n 75067 ]] 00:06:22.164 14:24:34 json_config -- json_config/common.sh@38 -- # kill -SIGINT 75067 00:06:22.164 14:24:34 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:22.164 14:24:34 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:22.164 14:24:34 json_config -- json_config/common.sh@41 -- # kill -0 75067 00:06:22.164 14:24:34 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:22.732 14:24:34 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:22.732 14:24:34 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:22.732 14:24:34 json_config -- json_config/common.sh@41 -- # kill -0 75067 00:06:22.732 14:24:34 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:22.732 14:24:34 json_config -- json_config/common.sh@43 -- # break 00:06:22.732 14:24:34 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:22.732 SPDK target shutdown done 00:06:22.732 14:24:34 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:22.732 INFO: relaunching applications... 00:06:22.732 14:24:34 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
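After shutdown, the target is relaunched directly from the saved JSON rather than by replaying individual RPCs. A minimal sketch of that relaunch, using the same binary and flags that appear in the trace below:
# Illustrative sketch, not captured test output.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock \
    --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json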
00:06:22.732 14:24:34 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:22.732 14:24:34 json_config -- json_config/common.sh@9 -- # local app=target 00:06:22.732 14:24:34 json_config -- json_config/common.sh@10 -- # shift 00:06:22.732 14:24:34 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:22.732 14:24:34 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:22.732 14:24:34 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:22.732 14:24:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:22.732 14:24:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:22.732 14:24:34 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=75344 00:06:22.732 Waiting for target to run... 00:06:22.732 14:24:34 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:22.732 14:24:34 json_config -- json_config/common.sh@25 -- # waitforlisten 75344 /var/tmp/spdk_tgt.sock 00:06:22.732 14:24:34 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:22.732 14:24:34 json_config -- common/autotest_common.sh@829 -- # '[' -z 75344 ']' 00:06:22.732 14:24:34 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:22.732 14:24:34 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:22.732 14:24:34 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:22.732 14:24:34 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.732 14:24:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.732 [2024-07-10 14:24:34.980300] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:06:22.732 [2024-07-10 14:24:34.980391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75344 ] 00:06:22.991 [2024-07-10 14:24:35.258116] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:22.991 [2024-07-10 14:24:35.275001] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.250 [2024-07-10 14:24:35.299339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.508 [2024-07-10 14:24:35.600567] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:23.508 [2024-07-10 14:24:35.632720] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:23.767 14:24:36 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.767 14:24:36 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:23.767 00:06:23.767 14:24:36 json_config -- json_config/common.sh@26 -- # echo '' 00:06:23.767 14:24:36 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:23.767 INFO: Checking if target configuration is the same... 
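The check that follows normalizes both the live configuration (via save_config) and the previously saved file with config_filter.py -method sort, then diffs the results. A minimal sketch of that comparison, assuming config_filter.py reads a configuration on stdin and writes the sorted form to stdout (the json_diff.sh plumbing through /dev/fd/62 is elided):
# Illustrative sketch, not captured test output.
SPDK=/home/vagrant/spdk_repo/spdk
live=$(mktemp)
saved=$(mktemp)
$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | $SPDK/test/json_config/config_filter.py -method sort > "$live"
$SPDK/test/json_config/config_filter.py -method sort < $SPDK/spdk_tgt_config.json > "$saved"
if diff -u "$saved" "$live"; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi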
00:06:23.767 14:24:36 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:23.767 14:24:36 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:23.767 14:24:36 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:23.767 14:24:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:23.767 + '[' 2 -ne 2 ']' 00:06:23.767 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:23.767 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:23.767 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:23.767 +++ basename /dev/fd/62 00:06:23.767 ++ mktemp /tmp/62.XXX 00:06:23.767 + tmp_file_1=/tmp/62.ipN 00:06:23.767 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:23.767 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:23.767 + tmp_file_2=/tmp/spdk_tgt_config.json.vMs 00:06:23.767 + ret=0 00:06:23.767 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:24.333 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:24.333 + diff -u /tmp/62.ipN /tmp/spdk_tgt_config.json.vMs 00:06:24.333 INFO: JSON config files are the same 00:06:24.333 + echo 'INFO: JSON config files are the same' 00:06:24.333 + rm /tmp/62.ipN /tmp/spdk_tgt_config.json.vMs 00:06:24.333 + exit 0 00:06:24.333 14:24:36 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:24.333 INFO: changing configuration and checking if this can be detected... 00:06:24.333 14:24:36 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:24.333 14:24:36 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:24.333 14:24:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:24.592 14:24:36 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:24.592 14:24:36 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:24.592 14:24:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:24.592 + '[' 2 -ne 2 ']' 00:06:24.592 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:24.592 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:06:24.592 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:24.592 +++ basename /dev/fd/62 00:06:24.592 ++ mktemp /tmp/62.XXX 00:06:24.592 + tmp_file_1=/tmp/62.Fdc 00:06:24.592 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:24.592 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:24.592 + tmp_file_2=/tmp/spdk_tgt_config.json.jMb 00:06:24.592 + ret=0 00:06:24.592 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:25.157 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:25.157 + diff -u /tmp/62.Fdc /tmp/spdk_tgt_config.json.jMb 00:06:25.157 + ret=1 00:06:25.157 + echo '=== Start of file: /tmp/62.Fdc ===' 00:06:25.157 + cat /tmp/62.Fdc 00:06:25.157 + echo '=== End of file: /tmp/62.Fdc ===' 00:06:25.157 + echo '' 00:06:25.157 + echo '=== Start of file: /tmp/spdk_tgt_config.json.jMb ===' 00:06:25.157 + cat /tmp/spdk_tgt_config.json.jMb 00:06:25.157 + echo '=== End of file: /tmp/spdk_tgt_config.json.jMb ===' 00:06:25.157 + echo '' 00:06:25.157 + rm /tmp/62.Fdc /tmp/spdk_tgt_config.json.jMb 00:06:25.157 + exit 1 00:06:25.157 INFO: configuration change detected. 00:06:25.157 14:24:37 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:25.157 14:24:37 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:25.157 14:24:37 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:25.157 14:24:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:25.157 14:24:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.158 14:24:37 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:25.158 14:24:37 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:25.158 14:24:37 json_config -- json_config/json_config.sh@317 -- # [[ -n 75344 ]] 00:06:25.158 14:24:37 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:25.158 14:24:37 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:25.158 14:24:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:25.158 14:24:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.158 14:24:37 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:25.158 14:24:37 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:25.158 14:24:37 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:25.158 14:24:37 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:25.158 14:24:37 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:25.158 14:24:37 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:25.158 14:24:37 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:25.158 14:24:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.158 14:24:37 json_config -- json_config/json_config.sh@323 -- # killprocess 75344 00:06:25.158 14:24:37 json_config -- common/autotest_common.sh@948 -- # '[' -z 75344 ']' 00:06:25.158 14:24:37 json_config -- common/autotest_common.sh@952 -- # kill -0 75344 00:06:25.158 14:24:37 json_config -- common/autotest_common.sh@953 -- # uname 00:06:25.158 14:24:37 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:25.158 14:24:37 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75344 00:06:25.158 
14:24:37 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:25.158 14:24:37 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:25.158 killing process with pid 75344 00:06:25.158 14:24:37 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75344' 00:06:25.158 14:24:37 json_config -- common/autotest_common.sh@967 -- # kill 75344 00:06:25.158 14:24:37 json_config -- common/autotest_common.sh@972 -- # wait 75344 00:06:25.416 14:24:37 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:25.416 14:24:37 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:25.416 14:24:37 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:25.416 14:24:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.416 14:24:37 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:25.416 INFO: Success 00:06:25.416 14:24:37 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:25.416 00:06:25.416 real 0m8.588s 00:06:25.416 user 0m12.709s 00:06:25.416 sys 0m1.558s 00:06:25.416 14:24:37 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.416 14:24:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.416 ************************************ 00:06:25.416 END TEST json_config 00:06:25.416 ************************************ 00:06:25.416 14:24:37 -- common/autotest_common.sh@1142 -- # return 0 00:06:25.416 14:24:37 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:25.416 14:24:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.416 14:24:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.416 14:24:37 -- common/autotest_common.sh@10 -- # set +x 00:06:25.416 ************************************ 00:06:25.416 START TEST json_config_extra_key 00:06:25.416 ************************************ 00:06:25.416 14:24:37 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:25.416 14:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:25.416 14:24:37 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:25.416 14:24:37 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.416 14:24:37 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.416 14:24:37 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.416 14:24:37 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.416 14:24:37 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.416 14:24:37 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.416 14:24:37 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.416 14:24:37 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.416 14:24:37 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.416 14:24:37 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.416 14:24:37 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:06:25.416 14:24:37 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:06:25.416 14:24:37 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.416 14:24:37 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.416 14:24:37 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:25.416 14:24:37 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.416 14:24:37 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:25.416 14:24:37 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.416 14:24:37 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.416 14:24:37 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.416 14:24:37 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.416 14:24:37 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.417 14:24:37 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.417 14:24:37 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:25.417 14:24:37 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.417 14:24:37 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:25.417 14:24:37 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:25.417 14:24:37 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:25.417 14:24:37 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.417 14:24:37 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.417 14:24:37 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.417 14:24:37 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:25.417 14:24:37 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:25.417 14:24:37 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:25.417 14:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:25.417 14:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:25.417 14:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:25.417 14:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:25.417 14:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:25.417 14:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:25.417 14:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:25.417 14:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:25.417 14:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:25.417 14:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:25.417 INFO: launching applications... 00:06:25.417 14:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:25.417 14:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:25.417 14:24:37 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:25.417 14:24:37 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:25.417 14:24:37 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:25.417 14:24:37 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:25.417 14:24:37 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:25.417 14:24:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.417 14:24:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.417 14:24:37 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=75514 00:06:25.417 Waiting for target to run... 00:06:25.417 14:24:37 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:06:25.417 14:24:37 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:25.417 14:24:37 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 75514 /var/tmp/spdk_tgt.sock 00:06:25.417 14:24:37 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 75514 ']' 00:06:25.417 14:24:37 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:25.417 14:24:37 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:25.417 14:24:37 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:25.417 14:24:37 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.417 14:24:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:25.704 [2024-07-10 14:24:37.708801] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:06:25.704 [2024-07-10 14:24:37.708876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75514 ] 00:06:25.982 [2024-07-10 14:24:37.973850] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:25.982 [2024-07-10 14:24:37.991665] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.982 [2024-07-10 14:24:38.015467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.548 14:24:38 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.548 00:06:26.548 INFO: shutting down applications... 00:06:26.548 14:24:38 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:26.548 14:24:38 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:26.548 14:24:38 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
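The launch traced above starts spdk_tgt with -m 0x1 -s 1024, a private RPC socket, and the extra_key.json config, then waits for the target to listen on that socket before the test proceeds. A rough sketch of the same launch-and-wait pattern follows; the retry count, poll interval, and the use of rpc_get_methods as the readiness probe are assumptions, not the literal waitforlisten code from common.sh.

# Sketch: start spdk_tgt from a JSON config and poll its RPC socket until it answers.
rootdir=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/spdk_tgt.sock
"$rootdir/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$sock" \
    --json "$rootdir/test/json_config/extra_key.json" &
tgt_pid=$!
for _ in $(seq 1 100); do
    # rpc_get_methods only succeeds once the target is up and listening on $sock
    if "$rootdir/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done
echo "target $tgt_pid is listening on $sock"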
00:06:26.548 14:24:38 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:26.548 14:24:38 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:26.548 14:24:38 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:26.548 14:24:38 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 75514 ]] 00:06:26.549 14:24:38 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 75514 00:06:26.549 14:24:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:26.549 14:24:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:26.549 14:24:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 75514 00:06:26.549 14:24:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:27.116 14:24:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:27.116 14:24:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:27.116 14:24:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 75514 00:06:27.116 14:24:39 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:27.116 14:24:39 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:27.116 14:24:39 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:27.116 SPDK target shutdown done 00:06:27.116 14:24:39 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:27.116 Success 00:06:27.116 14:24:39 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:27.116 00:06:27.116 real 0m1.628s 00:06:27.116 user 0m1.513s 00:06:27.116 sys 0m0.273s 00:06:27.116 14:24:39 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.116 ************************************ 00:06:27.116 END TEST json_config_extra_key 00:06:27.116 ************************************ 00:06:27.116 14:24:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:27.116 14:24:39 -- common/autotest_common.sh@1142 -- # return 0 00:06:27.116 14:24:39 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:27.116 14:24:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.116 14:24:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.116 14:24:39 -- common/autotest_common.sh@10 -- # set +x 00:06:27.116 ************************************ 00:06:27.116 START TEST alias_rpc 00:06:27.116 ************************************ 00:06:27.116 14:24:39 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:27.116 * Looking for test storage... 
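The shutdown sequence traced above sends SIGINT to the target and then polls with kill -0 (which only tests for process existence) until the process is gone. The 30 iterations of 0.5 s match the counters visible in the trace; the sketch below simplifies the error handling and uses the pid from the trace purely as an illustration.

# Sketch: SIGINT the app, then wait up to ~15 s for it to exit.
app_pid=75514   # pid taken from the trace above, illustrative only
kill -SIGINT "$app_pid"
for (( i = 0; i < 30; i++ )); do
    if ! kill -0 "$app_pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done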
00:06:27.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:27.116 14:24:39 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:27.116 14:24:39 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=75596 00:06:27.116 14:24:39 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 75596 00:06:27.116 14:24:39 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:27.116 14:24:39 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 75596 ']' 00:06:27.116 14:24:39 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.116 14:24:39 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.116 14:24:39 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.116 14:24:39 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.116 14:24:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.116 [2024-07-10 14:24:39.397734] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:06:27.116 [2024-07-10 14:24:39.397836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75596 ] 00:06:27.375 [2024-07-10 14:24:39.518896] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:27.375 [2024-07-10 14:24:39.538254] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.375 [2024-07-10 14:24:39.579613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.633 14:24:39 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.633 14:24:39 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:27.633 14:24:39 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:27.892 14:24:40 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 75596 00:06:27.892 14:24:40 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 75596 ']' 00:06:27.892 14:24:40 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 75596 00:06:27.892 14:24:40 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:27.892 14:24:40 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:27.892 14:24:40 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75596 00:06:27.892 14:24:40 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:27.892 killing process with pid 75596 00:06:27.892 14:24:40 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:27.892 14:24:40 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75596' 00:06:27.892 14:24:40 alias_rpc -- common/autotest_common.sh@967 -- # kill 75596 00:06:27.892 14:24:40 alias_rpc -- common/autotest_common.sh@972 -- # wait 75596 00:06:28.150 00:06:28.150 real 0m1.066s 00:06:28.150 user 0m1.259s 00:06:28.150 sys 0m0.310s 00:06:28.150 14:24:40 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.150 ************************************ 00:06:28.150 END TEST alias_rpc 00:06:28.150 ************************************ 00:06:28.150 14:24:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.150 14:24:40 -- common/autotest_common.sh@1142 -- # return 0 00:06:28.150 14:24:40 -- spdk/autotest.sh@176 -- # [[ 1 -eq 0 ]] 00:06:28.150 14:24:40 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:28.150 14:24:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.150 14:24:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.150 14:24:40 -- common/autotest_common.sh@10 -- # set +x 00:06:28.150 ************************************ 00:06:28.150 START TEST dpdk_mem_utility 00:06:28.150 ************************************ 00:06:28.150 14:24:40 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:28.409 * Looking for test storage... 
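The killprocess trace that recurs above (pids 75344, 75596, and later 75669) checks that the pid is still alive, looks up its command name with ps (reactor_0 for an SPDK target), special-cases a sudo wrapper, and then kills and reaps the process. The sketch below is a simplified stand-in for that helper: the real autotest_common.sh handles the sudo case differently, whereas this version simply bails out.

# Sketch of the killprocess pattern seen in the trace.
killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1           # still running?
    local name
    name=$(ps --no-headers -o comm= "$pid")          # e.g. reactor_0 for spdk_tgt
    [ "$name" = sudo ] && return 1                   # simplified: don't touch a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                  # reap if it is our child
}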
00:06:28.409 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:28.409 14:24:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:28.409 14:24:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=75669 00:06:28.409 14:24:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 75669 00:06:28.409 14:24:40 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 75669 ']' 00:06:28.409 14:24:40 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.409 14:24:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:28.409 14:24:40 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.409 14:24:40 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.409 14:24:40 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.409 14:24:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:28.409 [2024-07-10 14:24:40.517753] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:06:28.409 [2024-07-10 14:24:40.517864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75669 ] 00:06:28.409 [2024-07-10 14:24:40.639843] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:28.409 [2024-07-10 14:24:40.658316] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.668 [2024-07-10 14:24:40.699812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.668 14:24:40 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.668 14:24:40 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:28.668 14:24:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:28.668 14:24:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:28.668 14:24:40 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.668 14:24:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:28.668 { 00:06:28.668 "filename": "/tmp/spdk_mem_dump.txt" 00:06:28.668 } 00:06:28.668 14:24:40 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.668 14:24:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:28.668 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:28.668 1 heaps totaling size 814.000000 MiB 00:06:28.668 size: 814.000000 MiB heap id: 0 00:06:28.668 end heaps---------- 00:06:28.668 8 mempools totaling size 598.116089 MiB 00:06:28.668 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:28.668 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:28.668 size: 84.521057 MiB name: bdev_io_75669 00:06:28.668 size: 51.011292 MiB name: evtpool_75669 00:06:28.668 size: 50.003479 MiB name: msgpool_75669 00:06:28.668 size: 21.763794 MiB name: PDU_Pool 00:06:28.668 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:28.668 size: 0.026123 MiB name: Session_Pool 00:06:28.668 end mempools------- 00:06:28.668 6 memzones totaling size 4.142822 MiB 00:06:28.668 size: 1.000366 MiB name: RG_ring_0_75669 00:06:28.668 size: 1.000366 MiB name: RG_ring_1_75669 00:06:28.668 size: 1.000366 MiB name: RG_ring_4_75669 00:06:28.668 size: 1.000366 MiB name: RG_ring_5_75669 00:06:28.668 size: 0.125366 MiB name: RG_ring_2_75669 00:06:28.668 size: 0.015991 MiB name: RG_ring_3_75669 00:06:28.668 end memzones------- 00:06:28.668 14:24:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:28.928 heap id: 0 total size: 814.000000 MiB number of busy elements: 219 number of free elements: 15 00:06:28.928 list of free elements. 
size: 12.486755 MiB 00:06:28.928 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:28.928 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:28.928 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:28.928 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:28.928 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:28.928 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:28.928 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:28.928 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:28.928 element at address: 0x200000200000 with size: 0.837036 MiB 00:06:28.928 element at address: 0x20001aa00000 with size: 0.572998 MiB 00:06:28.928 element at address: 0x20000b200000 with size: 0.489807 MiB 00:06:28.928 element at address: 0x200000800000 with size: 0.487061 MiB 00:06:28.928 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:28.928 element at address: 0x200027e00000 with size: 0.398315 MiB 00:06:28.928 element at address: 0x200003a00000 with size: 0.350769 MiB 00:06:28.928 list of standard malloc elements. size: 199.250671 MiB 00:06:28.928 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:28.928 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:28.928 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:28.928 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:28.928 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:28.929 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:28.929 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:28.929 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:28.929 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:28.929 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000002d7700 with size: 0.000183 MiB 
00:06:28.929 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:28.929 element at 
address: 0x20000b27d640 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:28.929 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa946c0 
with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:28.929 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200027e65f80 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200027e66040 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200027e6cc40 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:06:28.929 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6e280 with size: 0.000183 MiB 
00:06:28.930 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:28.930 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:28.930 list of memzone associated elements. 
size: 602.262573 MiB 00:06:28.930 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:28.930 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:28.930 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:28.930 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:28.930 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:28.930 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_75669_0 00:06:28.930 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:28.930 associated memzone info: size: 48.002930 MiB name: MP_evtpool_75669_0 00:06:28.930 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:28.930 associated memzone info: size: 48.002930 MiB name: MP_msgpool_75669_0 00:06:28.930 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:28.930 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:28.930 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:28.930 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:28.930 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:28.930 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_75669 00:06:28.930 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:28.930 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_75669 00:06:28.930 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:28.930 associated memzone info: size: 1.007996 MiB name: MP_evtpool_75669 00:06:28.930 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:28.930 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:28.930 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:28.930 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:28.930 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:28.930 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:28.930 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:28.930 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:28.930 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:28.930 associated memzone info: size: 1.000366 MiB name: RG_ring_0_75669 00:06:28.930 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:28.930 associated memzone info: size: 1.000366 MiB name: RG_ring_1_75669 00:06:28.930 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:28.930 associated memzone info: size: 1.000366 MiB name: RG_ring_4_75669 00:06:28.930 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:28.930 associated memzone info: size: 1.000366 MiB name: RG_ring_5_75669 00:06:28.930 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:28.930 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_75669 00:06:28.930 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:28.930 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:28.930 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:28.930 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:28.930 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:28.930 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:28.930 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:28.930 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_75669 00:06:28.930 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:28.930 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:28.930 element at address: 0x200027e66100 with size: 0.023743 MiB 00:06:28.930 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:28.930 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:28.930 associated memzone info: size: 0.015991 MiB name: RG_ring_3_75669 00:06:28.930 element at address: 0x200027e6c240 with size: 0.002441 MiB 00:06:28.930 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:28.930 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:06:28.930 associated memzone info: size: 0.000183 MiB name: MP_msgpool_75669 00:06:28.930 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:28.930 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_75669 00:06:28.930 element at address: 0x200027e6cd00 with size: 0.000305 MiB 00:06:28.930 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:28.930 14:24:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:28.930 14:24:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 75669 00:06:28.930 14:24:41 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 75669 ']' 00:06:28.930 14:24:41 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 75669 00:06:28.930 14:24:41 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:28.930 14:24:41 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:28.930 14:24:41 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75669 00:06:28.930 14:24:41 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:28.930 killing process with pid 75669 00:06:28.930 14:24:41 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:28.930 14:24:41 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75669' 00:06:28.930 14:24:41 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 75669 00:06:28.930 14:24:41 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 75669 00:06:29.189 00:06:29.189 real 0m0.924s 00:06:29.189 user 0m0.964s 00:06:29.189 sys 0m0.313s 00:06:29.189 14:24:41 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.189 ************************************ 00:06:29.189 END TEST dpdk_mem_utility 00:06:29.189 ************************************ 00:06:29.189 14:24:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:29.189 14:24:41 -- common/autotest_common.sh@1142 -- # return 0 00:06:29.189 14:24:41 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:29.189 14:24:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.189 14:24:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.189 14:24:41 -- common/autotest_common.sh@10 -- # set +x 00:06:29.189 ************************************ 00:06:29.189 START TEST event 00:06:29.189 ************************************ 00:06:29.189 14:24:41 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:29.189 * Looking for test storage... 
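The heap, mempool, and memzone listing above is produced in two steps that are visible in the trace: the env_dpdk_get_mem_stats RPC asks the running target to dump its DPDK memory statistics (the RPC result names /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py then summarizes that dump, with -m 0 printing the element-level detail for heap id 0. A condensed sketch of that flow follows; it assumes the script reads the default dump path, as the trace suggests.

# Sketch: dump and inspect DPDK memory usage of a running SPDK target.
rootdir=/home/vagrant/spdk_repo/spdk
"$rootdir/scripts/rpc.py" env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
"$rootdir/scripts/dpdk_mem_info.py"                 # summary: heaps, mempools, memzones
"$rootdir/scripts/dpdk_mem_info.py" -m 0            # per-element detail for heap id 0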
00:06:29.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:29.189 14:24:41 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:29.189 14:24:41 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:29.189 14:24:41 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:29.189 14:24:41 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:29.189 14:24:41 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.189 14:24:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.189 ************************************ 00:06:29.189 START TEST event_perf 00:06:29.189 ************************************ 00:06:29.189 14:24:41 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:29.189 Running I/O for 1 seconds...[2024-07-10 14:24:41.445953] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:06:29.190 [2024-07-10 14:24:41.446039] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75745 ] 00:06:29.448 [2024-07-10 14:24:41.565642] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:29.448 [2024-07-10 14:24:41.584147] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:29.448 Running I/O for 1 seconds...[2024-07-10 14:24:41.623324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.448 [2024-07-10 14:24:41.623393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.448 [2024-07-10 14:24:41.623518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.448 [2024-07-10 14:24:41.623522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.822 00:06:30.822 lcore 0: 197344 00:06:30.822 lcore 1: 197345 00:06:30.822 lcore 2: 197345 00:06:30.822 lcore 3: 197344 00:06:30.822 done. 00:06:30.822 00:06:30.822 real 0m1.253s 00:06:30.822 user 0m4.087s 00:06:30.822 sys 0m0.045s 00:06:30.822 14:24:42 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.822 14:24:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:30.822 ************************************ 00:06:30.822 END TEST event_perf 00:06:30.822 ************************************ 00:06:30.822 14:24:42 event -- common/autotest_common.sh@1142 -- # return 0 00:06:30.822 14:24:42 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:30.822 14:24:42 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:30.822 14:24:42 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.822 14:24:42 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.822 ************************************ 00:06:30.822 START TEST event_reactor 00:06:30.822 ************************************ 00:06:30.822 14:24:42 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:30.822 [2024-07-10 14:24:42.742105] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 
00:06:30.822 [2024-07-10 14:24:42.742211] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75778 ] 00:06:30.822 [2024-07-10 14:24:42.859069] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:30.822 [2024-07-10 14:24:42.878489] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.822 [2024-07-10 14:24:42.916583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.757 test_start 00:06:31.757 oneshot 00:06:31.757 tick 100 00:06:31.757 tick 100 00:06:31.757 tick 250 00:06:31.757 tick 100 00:06:31.757 tick 100 00:06:31.757 tick 100 00:06:31.757 tick 250 00:06:31.757 tick 500 00:06:31.757 tick 100 00:06:31.757 tick 100 00:06:31.757 tick 250 00:06:31.757 tick 100 00:06:31.757 tick 100 00:06:31.757 test_end 00:06:31.757 00:06:31.757 real 0m1.247s 00:06:31.757 user 0m1.100s 00:06:31.757 sys 0m0.040s 00:06:31.757 14:24:43 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.757 ************************************ 00:06:31.757 END TEST event_reactor 00:06:31.757 ************************************ 00:06:31.757 14:24:43 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:31.757 14:24:44 event -- common/autotest_common.sh@1142 -- # return 0 00:06:31.757 14:24:44 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:31.757 14:24:44 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:31.757 14:24:44 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.757 14:24:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:31.757 ************************************ 00:06:31.757 START TEST event_reactor_perf 00:06:31.757 ************************************ 00:06:31.757 14:24:44 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:31.757 [2024-07-10 14:24:44.041716] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:06:31.757 [2024-07-10 14:24:44.041840] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75814 ] 00:06:32.015 [2024-07-10 14:24:44.164202] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:32.015 [2024-07-10 14:24:44.185599] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.015 [2024-07-10 14:24:44.226312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.389 test_start 00:06:33.389 test_end 00:06:33.389 Performance: 340909 events per second 00:06:33.389 ************************************ 00:06:33.389 END TEST event_reactor_perf 00:06:33.389 ************************************ 00:06:33.389 00:06:33.389 real 0m1.259s 00:06:33.389 user 0m1.104s 00:06:33.389 sys 0m0.048s 00:06:33.389 14:24:45 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.389 14:24:45 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:33.389 14:24:45 event -- common/autotest_common.sh@1142 -- # return 0 00:06:33.389 14:24:45 event -- event/event.sh@49 -- # uname -s 00:06:33.389 14:24:45 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:33.389 14:24:45 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:33.389 14:24:45 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.389 14:24:45 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.389 14:24:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.389 ************************************ 00:06:33.389 START TEST event_scheduler 00:06:33.389 ************************************ 00:06:33.389 14:24:45 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:33.389 * Looking for test storage... 00:06:33.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:33.389 14:24:45 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:33.389 14:24:45 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=75870 00:06:33.389 14:24:45 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:33.389 14:24:45 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:33.389 14:24:45 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 75870 00:06:33.389 14:24:45 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 75870 ']' 00:06:33.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.389 14:24:45 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.389 14:24:45 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.389 14:24:45 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.389 14:24:45 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.389 14:24:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:33.390 [2024-07-10 14:24:45.466957] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 
00:06:33.390 [2024-07-10 14:24:45.467061] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75870 ] 00:06:33.390 [2024-07-10 14:24:45.592067] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:33.390 [2024-07-10 14:24:45.611227] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:33.390 [2024-07-10 14:24:45.669257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.390 [2024-07-10 14:24:45.669370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.390 [2024-07-10 14:24:45.669416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.390 [2024-07-10 14:24:45.669425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.649 14:24:45 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.649 14:24:45 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:33.649 14:24:45 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:33.649 14:24:45 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.649 14:24:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:33.649 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:33.649 POWER: Cannot set governor of lcore 0 to userspace 00:06:33.649 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:33.649 POWER: Cannot set governor of lcore 0 to performance 00:06:33.649 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:33.649 POWER: Cannot set governor of lcore 0 to userspace 00:06:33.649 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:33.649 POWER: Cannot set governor of lcore 0 to userspace 00:06:33.649 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:33.649 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:33.649 POWER: Unable to set Power Management Environment for lcore 0 00:06:33.649 [2024-07-10 14:24:45.749651] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:33.649 [2024-07-10 14:24:45.749857] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:33.649 [2024-07-10 14:24:45.750066] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:33.649 [2024-07-10 14:24:45.750302] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:33.649 [2024-07-10 14:24:45.750509] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:33.649 [2024-07-10 14:24:45.750724] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:33.649 14:24:45 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.649 14:24:45 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:33.649 14:24:45 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.649 14:24:45 event.event_scheduler -- common/autotest_common.sh@10 
-- # set +x 00:06:33.649 [2024-07-10 14:24:45.803132] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:33.649 14:24:45 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.649 14:24:45 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:33.649 14:24:45 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.649 14:24:45 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.649 14:24:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:33.649 ************************************ 00:06:33.649 START TEST scheduler_create_thread 00:06:33.649 ************************************ 00:06:33.649 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:33.649 14:24:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:33.649 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.649 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.649 2 00:06:33.649 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.649 14:24:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:33.649 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.649 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.649 3 00:06:33.649 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.649 14:24:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:33.649 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.649 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.649 4 00:06:33.649 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.649 14:24:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:33.649 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.649 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.649 5 00:06:33.649 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.649 14:24:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:33.649 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.649 14:24:45 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:33.649 6 00:06:33.649 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.649 14:24:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:33.649 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.649 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.649 7 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.650 8 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.650 9 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.650 10 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.650 14:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.027 14:24:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.027 00:06:35.027 real 0m1.172s 00:06:35.027 user 0m0.012s 00:06:35.027 sys 0m0.005s 00:06:35.027 ************************************ 00:06:35.027 END TEST scheduler_create_thread 00:06:35.027 ************************************ 00:06:35.027 14:24:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.027 14:24:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.027 14:24:47 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:35.027 14:24:47 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:35.027 14:24:47 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 75870 00:06:35.027 14:24:47 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 75870 ']' 00:06:35.027 14:24:47 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 75870 00:06:35.027 14:24:47 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:35.027 14:24:47 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:35.027 14:24:47 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75870 00:06:35.027 killing process with pid 75870 00:06:35.027 14:24:47 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:35.027 14:24:47 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:35.027 14:24:47 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75870' 00:06:35.027 14:24:47 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 75870 00:06:35.027 14:24:47 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 75870 00:06:35.286 [2024-07-10 14:24:47.465759] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
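Two things are worth reading out of the scheduler run above. First, the cpufreq/governor errors are expected in this VM: with no scaling_governor sysfs entries the dpdk governor cannot initialize, and the dynamic scheduler simply runs without it (the "Unable to initialize dpdk governor" notice, followed by the load/core/busy limit notices). Second, the whole test is driven over RPC: switch to the dynamic scheduler, finish init, then exercise the scheduler_plugin thread calls. A hedged reconstruction of that call sequence, assuming scheduler_plugin.py is on PYTHONPATH as scheduler.sh arranges; thread ids such as 11 and 12 are whatever scheduler_thread_create returns, so they are captured rather than hard-coded:

  SPDK_REPO=/home/vagrant/spdk_repo/spdk
  export PYTHONPATH=$SPDK_REPO/test/event/scheduler              # assumed location of scheduler_plugin.py
  rpc="$SPDK_REPO/scripts/rpc.py"
  prpc="$rpc --plugin scheduler_plugin"
  $rpc framework_set_scheduler dynamic                           # prints the load/core/busy limit notices
  $rpc framework_start_init                                      # "Scheduler test application started."
  $prpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # repeated for masks 0x2, 0x4, 0x8
  $prpc scheduler_thread_create -n idle_pinned   -m 0x1 -a 0     # likewise for the idle threads
  $prpc scheduler_thread_create -n one_third_active -a 30
  thread_id=$($prpc scheduler_thread_create -n half_active -a 0)
  $prpc scheduler_thread_set_active "$thread_id" 50
  thread_id=$($prpc scheduler_thread_create -n deleted -a 100)
  $prpc scheduler_thread_delete "$thread_id"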
00:06:35.545 ************************************ 00:06:35.545 END TEST event_scheduler 00:06:35.545 ************************************ 00:06:35.545 00:06:35.545 real 0m2.278s 00:06:35.545 user 0m2.607s 00:06:35.545 sys 0m0.284s 00:06:35.545 14:24:47 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.545 14:24:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:35.545 14:24:47 event -- common/autotest_common.sh@1142 -- # return 0 00:06:35.545 14:24:47 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:35.545 14:24:47 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:35.545 14:24:47 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:35.546 14:24:47 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.546 14:24:47 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.546 ************************************ 00:06:35.546 START TEST app_repeat 00:06:35.546 ************************************ 00:06:35.546 14:24:47 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:35.546 14:24:47 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.546 14:24:47 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.546 14:24:47 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:35.546 14:24:47 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.546 14:24:47 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:35.546 14:24:47 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:35.546 14:24:47 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:35.546 Process app_repeat pid: 75957 00:06:35.546 14:24:47 event.app_repeat -- event/event.sh@19 -- # repeat_pid=75957 00:06:35.546 14:24:47 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:35.546 14:24:47 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:35.546 14:24:47 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 75957' 00:06:35.546 14:24:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:35.546 spdk_app_start Round 0 00:06:35.546 14:24:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:35.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:35.546 14:24:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75957 /var/tmp/spdk-nbd.sock 00:06:35.546 14:24:47 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 75957 ']' 00:06:35.546 14:24:47 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:35.546 14:24:47 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.546 14:24:47 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:35.546 14:24:47 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.546 14:24:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.546 [2024-07-10 14:24:47.693563] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 
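app_repeat_test starts the app_repeat helper on a private RPC socket, /var/tmp/spdk-nbd.sock, so the bdev/NBD RPCs that follow never touch the default socket, and it then loops over three rounds (spdk_app_start Round 0..2), ending each with spdk_kill_instance SIGTERM and a short sleep. A hedged sketch of the launch step shown above, simplified from the killprocess trap in event.sh:

  SPDK_REPO=/home/vagrant/spdk_repo/spdk
  rpc_server=/var/tmp/spdk-nbd.sock
  modprobe nbd                                              # the test drives /dev/nbd0 and /dev/nbd1
  "$SPDK_REPO/test/event/app_repeat/app_repeat" -r "$rpc_server" -m 0x3 -t 4 &
  repeat_pid=$!
  trap 'kill "$repeat_pid"; exit 1' SIGINT SIGTERM EXIT     # simplified stand-in for killprocess
  echo "Process app_repeat pid: $repeat_pid"
  # waitforlisten $repeat_pid $rpc_server then blocks until this socket answers, as in the trace.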
00:06:35.546 [2024-07-10 14:24:47.693889] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75957 ] 00:06:35.546 [2024-07-10 14:24:47.815051] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:35.546 [2024-07-10 14:24:47.833372] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.803 [2024-07-10 14:24:47.879595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.803 [2024-07-10 14:24:47.879606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.803 14:24:47 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.803 14:24:47 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:35.803 14:24:47 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.062 Malloc0 00:06:36.062 14:24:48 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.321 Malloc1 00:06:36.321 14:24:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.321 14:24:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.321 14:24:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.321 14:24:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:36.321 14:24:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.321 14:24:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:36.321 14:24:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.321 14:24:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.321 14:24:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.321 14:24:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:36.321 14:24:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.321 14:24:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:36.321 14:24:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:36.321 14:24:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:36.321 14:24:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.321 14:24:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:36.580 /dev/nbd0 00:06:36.580 14:24:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:36.580 14:24:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:36.580 14:24:48 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:36.580 14:24:48 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:36.580 14:24:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:36.580 14:24:48 event.app_repeat -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:36.580 14:24:48 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:36.580 14:24:48 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:36.580 14:24:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:36.580 14:24:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:36.581 14:24:48 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:36.581 1+0 records in 00:06:36.581 1+0 records out 00:06:36.581 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315672 s, 13.0 MB/s 00:06:36.581 14:24:48 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:36.581 14:24:48 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:36.581 14:24:48 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:36.581 14:24:48 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:36.581 14:24:48 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:36.581 14:24:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.581 14:24:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.581 14:24:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:36.840 /dev/nbd1 00:06:37.099 14:24:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:37.099 14:24:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:37.099 14:24:49 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:37.099 14:24:49 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:37.099 14:24:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:37.099 14:24:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:37.099 14:24:49 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:37.099 14:24:49 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:37.099 14:24:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:37.099 14:24:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:37.099 14:24:49 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.099 1+0 records in 00:06:37.099 1+0 records out 00:06:37.099 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315728 s, 13.0 MB/s 00:06:37.099 14:24:49 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.099 14:24:49 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:37.099 14:24:49 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.099 14:24:49 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:37.099 14:24:49 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:37.099 14:24:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.099 14:24:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.099 14:24:49 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.099 14:24:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.099 14:24:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:37.356 { 00:06:37.356 "bdev_name": "Malloc0", 00:06:37.356 "nbd_device": "/dev/nbd0" 00:06:37.356 }, 00:06:37.356 { 00:06:37.356 "bdev_name": "Malloc1", 00:06:37.356 "nbd_device": "/dev/nbd1" 00:06:37.356 } 00:06:37.356 ]' 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:37.356 { 00:06:37.356 "bdev_name": "Malloc0", 00:06:37.356 "nbd_device": "/dev/nbd0" 00:06:37.356 }, 00:06:37.356 { 00:06:37.356 "bdev_name": "Malloc1", 00:06:37.356 "nbd_device": "/dev/nbd1" 00:06:37.356 } 00:06:37.356 ]' 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:37.356 /dev/nbd1' 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:37.356 /dev/nbd1' 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:37.356 256+0 records in 00:06:37.356 256+0 records out 00:06:37.356 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00470398 s, 223 MB/s 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:37.356 256+0 records in 00:06:37.356 256+0 records out 00:06:37.356 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258295 s, 40.6 MB/s 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:37.356 256+0 records in 00:06:37.356 256+0 records out 00:06:37.356 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277611 s, 37.8 MB/s 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:37.356 
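Each round builds the same fixture before any data goes through: two malloc bdevs created over the private socket (bdev_malloc_create 64 4096, i.e. 64 MB with a 4 KiB block size), each exported as an NBD device, and a waitfornbd poll that checks /proc/partitions and does a one-block O_DIRECT read before the device is trusted. A hedged sketch of that setup; the RPC names and sizes are from the trace, while the retry budget and scratch path are assumptions:

  SPDK_REPO=/home/vagrant/spdk_repo/spdk
  rpc="$SPDK_REPO/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create 64 4096            # -> Malloc0
  $rpc bdev_malloc_create 64 4096            # -> Malloc1
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  $rpc nbd_start_disk Malloc1 /dev/nbd1
  for nbd in nbd0 nbd1; do
      # waitfornbd: wait for the kernel to register the device, then sanity-read one 4 KiB block.
      for _ in $(seq 1 20); do
          grep -q -w "$nbd" /proc/partitions && break
          sleep 0.1
      done
      dd if="/dev/$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      rm -f /tmp/nbdtest
  done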
14:24:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.356 14:24:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:37.614 14:24:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:37.614 14:24:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:37.614 14:24:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:37.614 14:24:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.614 14:24:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.614 14:24:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:37.614 14:24:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:37.614 14:24:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.614 14:24:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.614 14:24:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:37.872 14:24:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:37.872 14:24:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:37.872 14:24:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:37.872 14:24:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.872 14:24:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.872 14:24:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:37.872 14:24:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:37.872 14:24:50 event.app_repeat -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:37.872 14:24:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.872 14:24:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.872 14:24:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.131 14:24:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:38.131 14:24:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:38.131 14:24:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:38.390 14:24:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:38.390 14:24:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:38.390 14:24:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.390 14:24:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:38.390 14:24:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:38.390 14:24:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:38.390 14:24:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:38.390 14:24:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:38.390 14:24:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:38.390 14:24:50 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:38.648 14:24:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:38.648 [2024-07-10 14:24:50.885223] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:38.648 [2024-07-10 14:24:50.924199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.648 [2024-07-10 14:24:50.924215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.907 [2024-07-10 14:24:50.953973] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:38.907 [2024-07-10 14:24:50.954022] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:42.192 spdk_app_start Round 1 00:06:42.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:42.192 14:24:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:42.192 14:24:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:42.192 14:24:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75957 /var/tmp/spdk-nbd.sock 00:06:42.192 14:24:53 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 75957 ']' 00:06:42.192 14:24:53 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:42.192 14:24:53 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.192 14:24:53 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
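The data check itself is ordinary dd plus cmp: 1 MiB of urandom goes into a scratch file, is written through each NBD device with O_DIRECT, and is then compared byte-for-byte against the source; after that the round stops both NBD exports, confirms nothing is left, and ends with spdk_kill_instance SIGTERM followed by a 3-second sleep before the next round. A hedged sketch of that pass, with the scratch path assumed:

  SPDK_REPO=/home/vagrant/spdk_repo/spdk
  rpc="$SPDK_REPO/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  tmp=/tmp/nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256               # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct    # write pass
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp" "$nbd"                               # verify pass; a mismatch fails the test
  done
  rm "$tmp"
  $rpc nbd_stop_disk /dev/nbd0                                 # waitfornbd_exit then polls /proc/partitions
  $rpc nbd_stop_disk /dev/nbd1
  $rpc spdk_kill_instance SIGTERM                              # end of the round; event.sh sleeps 3s and loops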
00:06:42.192 14:24:53 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.192 14:24:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:42.192 14:24:54 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.192 14:24:54 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:42.192 14:24:54 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:42.192 Malloc0 00:06:42.192 14:24:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:42.452 Malloc1 00:06:42.452 14:24:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:42.452 14:24:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.452 14:24:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:42.452 14:24:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:42.452 14:24:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.452 14:24:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:42.452 14:24:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:42.452 14:24:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.452 14:24:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:42.452 14:24:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:42.452 14:24:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.452 14:24:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:42.452 14:24:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:42.452 14:24:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:42.452 14:24:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:42.452 14:24:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:42.711 /dev/nbd0 00:06:42.711 14:24:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:42.711 14:24:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:42.711 14:24:54 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:42.711 14:24:54 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:42.711 14:24:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:42.711 14:24:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:42.711 14:24:54 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:42.711 14:24:54 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:42.711 14:24:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:42.711 14:24:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:42.711 14:24:54 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:42.711 1+0 records in 00:06:42.711 1+0 records out 
00:06:42.711 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271769 s, 15.1 MB/s 00:06:42.711 14:24:54 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:42.711 14:24:54 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:42.711 14:24:54 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:42.711 14:24:54 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:42.711 14:24:54 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:42.711 14:24:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:42.711 14:24:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:42.711 14:24:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:43.016 /dev/nbd1 00:06:43.016 14:24:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:43.016 14:24:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:43.016 14:24:55 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:43.016 14:24:55 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:43.016 14:24:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:43.016 14:24:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:43.016 14:24:55 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:43.016 14:24:55 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:43.016 14:24:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:43.016 14:24:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:43.016 14:24:55 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:43.016 1+0 records in 00:06:43.016 1+0 records out 00:06:43.016 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361922 s, 11.3 MB/s 00:06:43.016 14:24:55 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:43.016 14:24:55 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:43.016 14:24:55 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:43.016 14:24:55 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:43.016 14:24:55 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:43.016 14:24:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.016 14:24:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.016 14:24:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:43.016 14:24:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.016 14:24:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:43.584 { 00:06:43.584 "bdev_name": "Malloc0", 00:06:43.584 "nbd_device": "/dev/nbd0" 00:06:43.584 }, 00:06:43.584 { 00:06:43.584 "bdev_name": "Malloc1", 00:06:43.584 "nbd_device": "/dev/nbd1" 00:06:43.584 } 
00:06:43.584 ]' 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:43.584 { 00:06:43.584 "bdev_name": "Malloc0", 00:06:43.584 "nbd_device": "/dev/nbd0" 00:06:43.584 }, 00:06:43.584 { 00:06:43.584 "bdev_name": "Malloc1", 00:06:43.584 "nbd_device": "/dev/nbd1" 00:06:43.584 } 00:06:43.584 ]' 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:43.584 /dev/nbd1' 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:43.584 /dev/nbd1' 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:43.584 256+0 records in 00:06:43.584 256+0 records out 00:06:43.584 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00689449 s, 152 MB/s 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:43.584 256+0 records in 00:06:43.584 256+0 records out 00:06:43.584 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256441 s, 40.9 MB/s 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:43.584 256+0 records in 00:06:43.584 256+0 records out 00:06:43.584 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0281106 s, 37.3 MB/s 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:43.584 14:24:55 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.584 14:24:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:43.844 14:24:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:43.844 14:24:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:43.844 14:24:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:43.844 14:24:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.844 14:24:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.844 14:24:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:43.844 14:24:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:43.844 14:24:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.844 14:24:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.844 14:24:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:44.104 14:24:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:44.104 14:24:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:44.104 14:24:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:44.104 14:24:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.104 14:24:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.104 14:24:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:44.104 14:24:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:44.104 14:24:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.104 14:24:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.104 14:24:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.104 14:24:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:44.362 14:24:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:44.362 14:24:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:44.362 14:24:56 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:44.619 14:24:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:44.619 14:24:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:44.619 14:24:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.619 14:24:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:44.619 14:24:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:44.619 14:24:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:44.619 14:24:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:44.619 14:24:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:44.619 14:24:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:44.619 14:24:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:44.877 14:24:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:44.877 [2024-07-10 14:24:57.107217] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:44.877 [2024-07-10 14:24:57.145404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.877 [2024-07-10 14:24:57.145417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.135 [2024-07-10 14:24:57.176542] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:45.135 [2024-07-10 14:24:57.176621] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:48.417 spdk_app_start Round 2 00:06:48.417 14:25:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:48.417 14:25:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:48.417 14:25:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75957 /var/tmp/spdk-nbd.sock 00:06:48.417 14:25:00 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 75957 ']' 00:06:48.417 14:25:00 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:48.417 14:25:00 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:48.418 14:25:00 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
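Throughout these rounds the count of exported devices is derived from the nbd_get_disks JSON, an array of {bdev_name, nbd_device} objects: jq pulls out the nbd_device fields and grep -c counts them, and the bare "true" in the trace reads as the usual guard that keeps grep's exit status 1 on zero matches from failing the script. A hedged sketch of that helper:

  SPDK_REPO=/home/vagrant/spdk_repo/spdk
  nbd_get_count() {
      local rpc_server=$1 disks names
      disks=$("$SPDK_REPO/scripts/rpc.py" -s "$rpc_server" nbd_get_disks)
      names=$(echo "$disks" | jq -r '.[] | .nbd_device')
      # grep -c prints the match count but exits 1 when it is zero, hence the || true guard.
      echo "$names" | grep -c /dev/nbd || true
  }
  nbd_get_count /var/tmp/spdk-nbd.sock   # 2 while both disks are exported, 0 after teardown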
00:06:48.418 14:25:00 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.418 14:25:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:48.418 14:25:00 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.418 14:25:00 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:48.418 14:25:00 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:48.418 Malloc0 00:06:48.418 14:25:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:48.674 Malloc1 00:06:48.674 14:25:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:48.674 14:25:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.674 14:25:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:48.674 14:25:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:48.674 14:25:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.674 14:25:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:48.674 14:25:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:48.674 14:25:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.674 14:25:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:48.674 14:25:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:48.674 14:25:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.674 14:25:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:48.674 14:25:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:48.674 14:25:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:48.674 14:25:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:48.674 14:25:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:48.931 /dev/nbd0 00:06:48.931 14:25:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:48.931 14:25:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:48.931 14:25:01 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:48.931 14:25:01 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:48.931 14:25:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:48.931 14:25:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:48.931 14:25:01 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:48.931 14:25:01 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:48.931 14:25:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:48.931 14:25:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:48.931 14:25:01 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:48.931 1+0 records in 00:06:48.931 1+0 records out 
00:06:48.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382325 s, 10.7 MB/s 00:06:48.931 14:25:01 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:48.931 14:25:01 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:48.931 14:25:01 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:48.931 14:25:01 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:48.931 14:25:01 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:48.931 14:25:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:48.932 14:25:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:48.932 14:25:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:49.190 /dev/nbd1 00:06:49.190 14:25:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:49.190 14:25:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:49.190 14:25:01 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:49.190 14:25:01 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:49.190 14:25:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:49.190 14:25:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:49.190 14:25:01 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:49.190 14:25:01 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:49.190 14:25:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:49.190 14:25:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:49.190 14:25:01 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.190 1+0 records in 00:06:49.190 1+0 records out 00:06:49.190 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457891 s, 8.9 MB/s 00:06:49.190 14:25:01 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:49.190 14:25:01 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:49.190 14:25:01 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:49.448 14:25:01 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:49.448 14:25:01 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:49.448 14:25:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.448 14:25:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.448 14:25:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:49.448 14:25:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.448 14:25:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.448 14:25:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:49.448 { 00:06:49.448 "bdev_name": "Malloc0", 00:06:49.448 "nbd_device": "/dev/nbd0" 00:06:49.448 }, 00:06:49.448 { 00:06:49.448 "bdev_name": "Malloc1", 00:06:49.448 "nbd_device": "/dev/nbd1" 00:06:49.448 } 
00:06:49.448 ]' 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:49.706 { 00:06:49.706 "bdev_name": "Malloc0", 00:06:49.706 "nbd_device": "/dev/nbd0" 00:06:49.706 }, 00:06:49.706 { 00:06:49.706 "bdev_name": "Malloc1", 00:06:49.706 "nbd_device": "/dev/nbd1" 00:06:49.706 } 00:06:49.706 ]' 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:49.706 /dev/nbd1' 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:49.706 /dev/nbd1' 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:49.706 256+0 records in 00:06:49.706 256+0 records out 00:06:49.706 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00660356 s, 159 MB/s 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:49.706 256+0 records in 00:06:49.706 256+0 records out 00:06:49.706 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257198 s, 40.8 MB/s 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:49.706 256+0 records in 00:06:49.706 256+0 records out 00:06:49.706 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026916 s, 39.0 MB/s 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.706 14:25:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:49.707 14:25:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:49.707 14:25:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:49.707 14:25:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.707 14:25:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.707 14:25:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:49.707 14:25:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:49.707 14:25:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.707 14:25:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:49.964 14:25:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:49.964 14:25:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:49.964 14:25:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:49.964 14:25:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.964 14:25:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.964 14:25:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:49.964 14:25:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:49.964 14:25:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.964 14:25:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.964 14:25:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:50.223 14:25:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:50.223 14:25:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:50.223 14:25:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:50.223 14:25:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.223 14:25:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.223 14:25:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:50.223 14:25:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:50.223 14:25:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.223 14:25:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.223 14:25:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.223 14:25:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.481 14:25:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:50.481 14:25:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:50.481 14:25:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:50.481 14:25:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:50.481 14:25:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:50.481 14:25:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.481 14:25:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:50.481 14:25:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:50.481 14:25:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:50.481 14:25:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:50.481 14:25:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:50.481 14:25:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:50.481 14:25:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:50.738 14:25:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:50.995 [2024-07-10 14:25:03.042293] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:50.995 [2024-07-10 14:25:03.079159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.995 [2024-07-10 14:25:03.079165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.995 [2024-07-10 14:25:03.108510] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:50.995 [2024-07-10 14:25:03.108570] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:54.332 14:25:05 event.app_repeat -- event/event.sh@38 -- # waitforlisten 75957 /var/tmp/spdk-nbd.sock 00:06:54.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:54.332 14:25:05 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 75957 ']' 00:06:54.332 14:25:05 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:54.332 14:25:05 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.332 14:25:05 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
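For reference, the NBD round-trip traced above reduces to the short sketch below. It is only a sketch: it assumes an SPDK application is already listening on /var/tmp/spdk-nbd.sock, that the nbd kernel module is loaded, and the temporary file path is illustrative.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
$rpc -s $sock bdev_malloc_create 64 4096               # prints the new bdev name (Malloc0 in the trace above)
$rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0         # export the bdev as /dev/nbd0
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0                # byte-compare what comes back through NBD
$rpc -s $sock nbd_stop_disk /dev/nbd0
$rpc -s $sock nbd_get_disks                            # '[]' once every export is stopped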
00:06:54.332 14:25:05 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.332 14:25:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:54.332 14:25:06 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.332 14:25:06 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:54.332 14:25:06 event.app_repeat -- event/event.sh@39 -- # killprocess 75957 00:06:54.332 14:25:06 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 75957 ']' 00:06:54.332 14:25:06 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 75957 00:06:54.332 14:25:06 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:54.332 14:25:06 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:54.332 14:25:06 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75957 00:06:54.332 killing process with pid 75957 00:06:54.332 14:25:06 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:54.332 14:25:06 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:54.332 14:25:06 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75957' 00:06:54.332 14:25:06 event.app_repeat -- common/autotest_common.sh@967 -- # kill 75957 00:06:54.332 14:25:06 event.app_repeat -- common/autotest_common.sh@972 -- # wait 75957 00:06:54.332 spdk_app_start is called in Round 0. 00:06:54.332 Shutdown signal received, stop current app iteration 00:06:54.332 Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 reinitialization... 00:06:54.332 spdk_app_start is called in Round 1. 00:06:54.332 Shutdown signal received, stop current app iteration 00:06:54.332 Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 reinitialization... 00:06:54.332 spdk_app_start is called in Round 2. 00:06:54.332 Shutdown signal received, stop current app iteration 00:06:54.332 Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 reinitialization... 00:06:54.332 spdk_app_start is called in Round 3. 
00:06:54.332 Shutdown signal received, stop current app iteration 00:06:54.332 14:25:06 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:54.332 14:25:06 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:54.332 00:06:54.332 real 0m18.704s 00:06:54.332 user 0m42.717s 00:06:54.332 sys 0m2.834s 00:06:54.332 14:25:06 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.332 14:25:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:54.332 ************************************ 00:06:54.332 END TEST app_repeat 00:06:54.332 ************************************ 00:06:54.332 14:25:06 event -- common/autotest_common.sh@1142 -- # return 0 00:06:54.332 14:25:06 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:54.332 14:25:06 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:54.332 14:25:06 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:54.332 14:25:06 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.332 14:25:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:54.332 ************************************ 00:06:54.332 START TEST cpu_locks 00:06:54.332 ************************************ 00:06:54.333 14:25:06 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:54.333 * Looking for test storage... 00:06:54.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:54.333 14:25:06 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:54.333 14:25:06 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:54.333 14:25:06 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:54.333 14:25:06 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:54.333 14:25:06 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:54.333 14:25:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.333 14:25:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.333 ************************************ 00:06:54.333 START TEST default_locks 00:06:54.333 ************************************ 00:06:54.333 14:25:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:54.333 14:25:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=76574 00:06:54.333 14:25:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 76574 00:06:54.333 14:25:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:54.333 14:25:06 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 76574 ']' 00:06:54.333 14:25:06 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.333 14:25:06 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.333 14:25:06 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
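The default_locks test that starts here checks the per-core lock by inspecting the target's file locks. A minimal way to observe the same thing by hand, assuming $pid holds the PID of an spdk_tgt started with -m 0x1, is:
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by pid $pid"
ls /var/tmp/spdk_cpu_lock_*                            # one lock file per claimed core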
00:06:54.333 14:25:06 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.333 14:25:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.333 [2024-07-10 14:25:06.556073] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:06:54.333 [2024-07-10 14:25:06.556173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76574 ] 00:06:54.591 [2024-07-10 14:25:06.674181] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:54.592 [2024-07-10 14:25:06.688889] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.592 [2024-07-10 14:25:06.725222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.592 14:25:06 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.592 14:25:06 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:54.592 14:25:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 76574 00:06:54.592 14:25:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 76574 00:06:54.592 14:25:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:55.159 14:25:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 76574 00:06:55.159 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 76574 ']' 00:06:55.159 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 76574 00:06:55.159 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:55.159 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:55.159 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76574 00:06:55.159 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:55.159 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:55.159 killing process with pid 76574 00:06:55.159 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76574' 00:06:55.159 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 76574 00:06:55.159 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 76574 00:06:55.418 14:25:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 76574 00:06:55.418 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:55.418 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 76574 00:06:55.418 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:55.418 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.418 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:55.418 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:06:55.418 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 76574 00:06:55.418 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 76574 ']' 00:06:55.418 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.418 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.418 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.418 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.418 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.418 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (76574) - No such process 00:06:55.418 ERROR: process (pid: 76574) is no longer running 00:06:55.418 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.418 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:55.418 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:55.418 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:55.418 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:55.418 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:55.418 14:25:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:55.418 14:25:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:55.418 14:25:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:55.418 14:25:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:55.418 00:06:55.418 real 0m0.996s 00:06:55.418 user 0m1.019s 00:06:55.418 sys 0m0.384s 00:06:55.418 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.418 14:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.418 ************************************ 00:06:55.418 END TEST default_locks 00:06:55.418 ************************************ 00:06:55.418 14:25:07 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:55.418 14:25:07 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:55.418 14:25:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:55.418 14:25:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.418 14:25:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.418 ************************************ 00:06:55.418 START TEST default_locks_via_rpc 00:06:55.418 ************************************ 00:06:55.418 14:25:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:55.418 14:25:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=76619 00:06:55.419 14:25:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 76619 00:06:55.419 14:25:07 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:55.419 14:25:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 76619 ']' 00:06:55.419 14:25:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.419 14:25:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.419 14:25:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.419 14:25:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.419 14:25:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.419 [2024-07-10 14:25:07.614832] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:06:55.419 [2024-07-10 14:25:07.614983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76619 ] 00:06:55.678 [2024-07-10 14:25:07.736788] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:55.678 [2024-07-10 14:25:07.754844] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.678 [2024-07-10 14:25:07.790366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.678 14:25:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.678 14:25:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:55.678 14:25:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:55.678 14:25:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.678 14:25:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.678 14:25:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.678 14:25:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:55.678 14:25:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:55.678 14:25:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:55.678 14:25:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:55.678 14:25:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:55.678 14:25:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.678 14:25:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.678 14:25:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.678 14:25:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 76619 00:06:55.936 14:25:07 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@22 -- # lslocks -p 76619 00:06:55.936 14:25:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:56.195 14:25:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 76619 00:06:56.195 14:25:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 76619 ']' 00:06:56.195 14:25:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 76619 00:06:56.195 14:25:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:56.195 14:25:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:56.195 14:25:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76619 00:06:56.195 14:25:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:56.195 14:25:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:56.195 killing process with pid 76619 00:06:56.195 14:25:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76619' 00:06:56.195 14:25:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 76619 00:06:56.195 14:25:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 76619 00:06:56.455 00:06:56.455 real 0m1.106s 00:06:56.455 user 0m1.195s 00:06:56.455 sys 0m0.427s 00:06:56.455 14:25:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.455 14:25:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.455 ************************************ 00:06:56.455 END TEST default_locks_via_rpc 00:06:56.455 ************************************ 00:06:56.455 14:25:08 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:56.455 14:25:08 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:56.455 14:25:08 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.455 14:25:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.455 14:25:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.455 ************************************ 00:06:56.455 START TEST non_locking_app_on_locked_coremask 00:06:56.455 ************************************ 00:06:56.455 14:25:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:56.455 14:25:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=76669 00:06:56.455 14:25:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 76669 /var/tmp/spdk.sock 00:06:56.455 14:25:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 76669 ']' 00:06:56.455 14:25:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.455 14:25:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:56.455 14:25:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:06:56.455 14:25:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.455 14:25:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.455 14:25:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.714 [2024-07-10 14:25:08.766795] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:06:56.714 [2024-07-10 14:25:08.766912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76669 ] 00:06:56.714 [2024-07-10 14:25:08.885069] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:56.714 [2024-07-10 14:25:08.906276] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.714 [2024-07-10 14:25:08.946616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.973 14:25:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.973 14:25:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:56.973 14:25:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=76678 00:06:56.973 14:25:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 76678 /var/tmp/spdk2.sock 00:06:56.973 14:25:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:56.973 14:25:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 76678 ']' 00:06:56.973 14:25:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.973 14:25:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:56.973 14:25:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.973 14:25:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.973 14:25:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.973 [2024-07-10 14:25:09.162325] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 
00:06:56.973 [2024-07-10 14:25:09.162430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76678 ] 00:06:57.231 [2024-07-10 14:25:09.281817] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:57.231 [2024-07-10 14:25:09.305025] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:57.231 [2024-07-10 14:25:09.305081] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.231 [2024-07-10 14:25:09.375845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.489 14:25:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.489 14:25:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:57.489 14:25:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 76669 00:06:57.489 14:25:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 76669 00:06:57.489 14:25:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:58.424 14:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 76669 00:06:58.424 14:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 76669 ']' 00:06:58.424 14:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 76669 00:06:58.424 14:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:58.424 14:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:58.424 14:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76669 00:06:58.424 14:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:58.424 14:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:58.424 killing process with pid 76669 00:06:58.424 14:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76669' 00:06:58.424 14:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 76669 00:06:58.424 14:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 76669 00:06:58.683 14:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 76678 00:06:58.683 14:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 76678 ']' 00:06:58.683 14:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 76678 00:06:58.683 14:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:58.683 14:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:58.683 14:25:10 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76678 00:06:58.683 14:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:58.683 14:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:58.683 killing process with pid 76678 00:06:58.683 14:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76678' 00:06:58.683 14:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 76678 00:06:58.683 14:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 76678 00:06:58.941 00:06:58.941 real 0m2.366s 00:06:58.942 user 0m2.641s 00:06:58.942 sys 0m0.797s 00:06:58.942 14:25:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.942 14:25:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.942 ************************************ 00:06:58.942 END TEST non_locking_app_on_locked_coremask 00:06:58.942 ************************************ 00:06:58.942 14:25:11 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:58.942 14:25:11 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:58.942 14:25:11 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:58.942 14:25:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.942 14:25:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.942 ************************************ 00:06:58.942 START TEST locking_app_on_unlocked_coremask 00:06:58.942 ************************************ 00:06:58.942 14:25:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:58.942 14:25:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=76738 00:06:58.942 14:25:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 76738 /var/tmp/spdk.sock 00:06:58.942 14:25:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:58.942 14:25:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 76738 ']' 00:06:58.942 14:25:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.942 14:25:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.942 14:25:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
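Each of these tests tears its targets down with the same killprocess idiom (used just above for pids 76669 and 76678). Stripped of the xtrace plumbing, and assuming the target was started from the same shell with its PID in $pid, it is roughly:
kill -0 "$pid"                                         # fails if the process is already gone
ps --no-headers -o comm= "$pid"                        # reports reactor_0 for an SPDK target
kill "$pid"
wait "$pid"                                            # reap the child so the socket and core lock are released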
00:06:58.942 14:25:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.942 14:25:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.942 [2024-07-10 14:25:11.185717] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:06:58.942 [2024-07-10 14:25:11.185817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76738 ] 00:06:59.200 [2024-07-10 14:25:11.308799] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:59.200 [2024-07-10 14:25:11.320750] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:59.200 [2024-07-10 14:25:11.320801] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.200 [2024-07-10 14:25:11.356532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.163 14:25:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.163 14:25:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:00.163 14:25:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=76766 00:07:00.163 14:25:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 76766 /var/tmp/spdk2.sock 00:07:00.163 14:25:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:00.163 14:25:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 76766 ']' 00:07:00.163 14:25:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.163 14:25:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.163 14:25:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.163 14:25:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.163 14:25:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.163 [2024-07-10 14:25:12.249661] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:00.163 [2024-07-10 14:25:12.249770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76766 ] 00:07:00.163 [2024-07-10 14:25:12.368823] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
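The pattern exercised here, two targets sharing core 0 because the first one opts out of core locks, comes down to the following sketch (backgrounding and the sleep are illustrative):
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
$spdk_tgt -m 0x1 --disable-cpumask-locks &             # first instance takes no lock on core 0
sleep 1                                                # illustrative wait for startup
$spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &              # second instance can still claim core 0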
00:07:00.163 [2024-07-10 14:25:12.393031] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.422 [2024-07-10 14:25:12.464849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.989 14:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.989 14:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:00.989 14:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 76766 00:07:00.989 14:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 76766 00:07:00.989 14:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:01.925 14:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 76738 00:07:01.925 14:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 76738 ']' 00:07:01.925 14:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 76738 00:07:01.925 14:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:01.925 14:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:01.925 14:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76738 00:07:01.925 14:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:01.925 killing process with pid 76738 00:07:01.925 14:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:01.925 14:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76738' 00:07:01.925 14:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 76738 00:07:01.925 14:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 76738 00:07:02.493 14:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 76766 00:07:02.493 14:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 76766 ']' 00:07:02.493 14:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 76766 00:07:02.493 14:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:02.493 14:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:02.493 14:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76766 00:07:02.493 14:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:02.493 killing process with pid 76766 00:07:02.493 14:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:02.493 14:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76766' 00:07:02.493 14:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@967 -- # kill 76766 00:07:02.493 14:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 76766 00:07:02.752 00:07:02.752 real 0m3.737s 00:07:02.752 user 0m4.400s 00:07:02.752 sys 0m1.001s 00:07:02.752 14:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.752 14:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.752 ************************************ 00:07:02.752 END TEST locking_app_on_unlocked_coremask 00:07:02.752 ************************************ 00:07:02.752 14:25:14 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:02.752 14:25:14 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:02.752 14:25:14 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:02.752 14:25:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.752 14:25:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.752 ************************************ 00:07:02.752 START TEST locking_app_on_locked_coremask 00:07:02.752 ************************************ 00:07:02.752 14:25:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:02.752 14:25:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=76845 00:07:02.752 14:25:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 76845 /var/tmp/spdk.sock 00:07:02.752 14:25:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:02.752 14:25:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 76845 ']' 00:07:02.752 14:25:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.752 14:25:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:02.752 14:25:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.752 14:25:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.752 14:25:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.752 [2024-07-10 14:25:14.963636] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:02.752 [2024-07-10 14:25:14.963739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76845 ] 00:07:03.011 [2024-07-10 14:25:15.084850] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:03.011 [2024-07-10 14:25:15.106099] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.011 [2024-07-10 14:25:15.148942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.270 14:25:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.270 14:25:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:03.270 14:25:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=76854 00:07:03.270 14:25:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:03.270 14:25:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 76854 /var/tmp/spdk2.sock 00:07:03.270 14:25:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:03.270 14:25:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 76854 /var/tmp/spdk2.sock 00:07:03.270 14:25:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:03.270 14:25:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.270 14:25:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:03.270 14:25:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.270 14:25:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 76854 /var/tmp/spdk2.sock 00:07:03.270 14:25:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 76854 ']' 00:07:03.270 14:25:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.270 14:25:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:03.270 14:25:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.270 14:25:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.270 14:25:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.270 [2024-07-10 14:25:15.377640] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:03.270 [2024-07-10 14:25:15.377735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76854 ] 00:07:03.270 [2024-07-10 14:25:15.500852] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:03.270 [2024-07-10 14:25:15.525369] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 76845 has claimed it. 
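That claim_cpu_cores error is the expected outcome when a second target is pointed at a core that is already locked, which is exactly what the NOT wrapper is asserting here. Reduced to a sketch (socket path illustrative, timing hand-waved):
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
sleep 1                                                # illustrative wait for the first target to lock core 0
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
echo $?                                                # non-zero: core 0 is already claimed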
00:07:03.270 [2024-07-10 14:25:15.525454] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:03.835 ERROR: process (pid: 76854) is no longer running 00:07:03.835 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (76854) - No such process 00:07:03.835 14:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.835 14:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:03.835 14:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:03.835 14:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:03.835 14:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:03.835 14:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:03.835 14:25:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 76845 00:07:03.835 14:25:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 76845 00:07:03.835 14:25:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:04.401 14:25:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 76845 00:07:04.401 14:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 76845 ']' 00:07:04.401 14:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 76845 00:07:04.401 14:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:04.401 14:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:04.401 14:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76845 00:07:04.401 14:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:04.401 14:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:04.401 killing process with pid 76845 00:07:04.401 14:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76845' 00:07:04.401 14:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 76845 00:07:04.401 14:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 76845 00:07:04.659 00:07:04.659 real 0m1.806s 00:07:04.659 user 0m2.137s 00:07:04.659 sys 0m0.472s 00:07:04.659 14:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.659 14:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.659 ************************************ 00:07:04.659 END TEST locking_app_on_locked_coremask 00:07:04.659 ************************************ 00:07:04.659 14:25:16 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:04.659 14:25:16 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:04.659 14:25:16 event.cpu_locks -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:04.659 14:25:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.659 14:25:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.659 ************************************ 00:07:04.659 START TEST locking_overlapped_coremask 00:07:04.659 ************************************ 00:07:04.659 14:25:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:04.659 14:25:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=76910 00:07:04.659 14:25:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:04.659 14:25:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 76910 /var/tmp/spdk.sock 00:07:04.659 14:25:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 76910 ']' 00:07:04.660 14:25:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.660 14:25:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.660 14:25:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.660 14:25:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.660 14:25:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.660 [2024-07-10 14:25:16.839174] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:04.660 [2024-07-10 14:25:16.839305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76910 ] 00:07:04.918 [2024-07-10 14:25:16.960161] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:04.918 [2024-07-10 14:25:16.972457] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.918 [2024-07-10 14:25:17.010901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.918 [2024-07-10 14:25:17.011027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.918 [2024-07-10 14:25:17.011035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.485 14:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.486 14:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:05.486 14:25:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=76936 00:07:05.486 14:25:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 76936 /var/tmp/spdk2.sock 00:07:05.486 14:25:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:05.486 14:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:05.486 14:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 76936 /var/tmp/spdk2.sock 00:07:05.486 14:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:05.486 14:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.486 14:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:05.486 14:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.486 14:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 76936 /var/tmp/spdk2.sock 00:07:05.486 14:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 76936 ']' 00:07:05.486 14:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:05.486 14:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:05.486 14:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:05.486 14:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.486 14:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.745 [2024-07-10 14:25:17.844200] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:05.745 [2024-07-10 14:25:17.844338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76936 ] 00:07:05.745 [2024-07-10 14:25:17.974103] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:05.745 [2024-07-10 14:25:17.997104] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 76910 has claimed it. 00:07:05.745 [2024-07-10 14:25:17.997154] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:06.311 ERROR: process (pid: 76936) is no longer running 00:07:06.311 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (76936) - No such process 00:07:06.311 14:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.311 14:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:06.311 14:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:06.311 14:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:06.311 14:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:06.311 14:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:06.311 14:25:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:06.311 14:25:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:06.311 14:25:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:06.311 14:25:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:06.311 14:25:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 76910 00:07:06.311 14:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 76910 ']' 00:07:06.311 14:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 76910 00:07:06.311 14:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:06.311 14:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:06.311 14:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76910 00:07:06.311 14:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:06.311 killing process with pid 76910 00:07:06.311 14:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:06.311 14:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76910' 00:07:06.311 14:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 76910 00:07:06.311 14:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 76910 00:07:06.571 00:07:06.571 real 0m2.067s 00:07:06.571 user 0m6.010s 00:07:06.571 sys 0m0.337s 00:07:06.571 14:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.571 
************************************ 00:07:06.571 END TEST locking_overlapped_coremask 00:07:06.571 ************************************ 00:07:06.571 14:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.830 14:25:18 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:06.830 14:25:18 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:06.830 14:25:18 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.830 14:25:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.830 14:25:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.830 ************************************ 00:07:06.831 START TEST locking_overlapped_coremask_via_rpc 00:07:06.831 ************************************ 00:07:06.831 14:25:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:06.831 14:25:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=76986 00:07:06.831 14:25:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 76986 /var/tmp/spdk.sock 00:07:06.831 14:25:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:06.831 14:25:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 76986 ']' 00:07:06.831 14:25:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.831 14:25:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.831 14:25:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.831 14:25:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.831 14:25:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.831 [2024-07-10 14:25:18.934308] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:06.831 [2024-07-10 14:25:18.934398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76986 ] 00:07:06.831 [2024-07-10 14:25:19.052743] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:06.831 [2024-07-10 14:25:19.067549] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
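
The "CPU core locks deactivated" notice above comes from the --disable-cpumask-locks flag; it is what lets this test bring up two targets whose masks overlap, something the previous test showed is otherwise refused at startup. A condensed sketch of the setup (paths relative to the SPDK build tree used in this run; the &-backgrounding merely stands in for the test harness):

    # With locking disabled, both instances start even though 0x7 and 0x1c share core 2.
    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
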
00:07:06.831 [2024-07-10 14:25:19.067619] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:06.831 [2024-07-10 14:25:19.105691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.831 [2024-07-10 14:25:19.105797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.831 [2024-07-10 14:25:19.105801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.090 14:25:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.090 14:25:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:07.090 14:25:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=76998 00:07:07.090 14:25:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:07.090 14:25:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 76998 /var/tmp/spdk2.sock 00:07:07.090 14:25:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 76998 ']' 00:07:07.090 14:25:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:07.090 14:25:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.090 14:25:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:07.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:07.090 14:25:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.090 14:25:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.090 [2024-07-10 14:25:19.324597] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:07.090 [2024-07-10 14:25:19.324900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76998 ] 00:07:07.348 [2024-07-10 14:25:19.447217] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:07.348 [2024-07-10 14:25:19.477636] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
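
Both core-lock tests use the same pair of masks, and the collision that lock claiming keeps reporting is easy to verify by hand: 0x7 covers cores 0-2, 0x1c covers cores 2-4, so the two masks intersect exactly on core 2.

    # The AND of the two masks is the overlap; bit 2 set means core 2 is contested.
    printf '0x%x\n' $(( 0x7 & 0x1c ))    # prints 0x4
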
00:07:07.348 [2024-07-10 14:25:19.477702] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:07.348 [2024-07-10 14:25:19.550953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.348 [2024-07-10 14:25:19.554349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.348 [2024-07-10 14:25:19.554349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:08.283 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.283 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:08.283 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:08.283 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.283 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.283 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.283 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:08.283 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:08.283 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:08.283 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:08.283 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.283 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:08.283 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.283 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:08.283 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.283 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.283 [2024-07-10 14:25:20.380437] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 76986 has claimed it. 
00:07:08.283 2024/07/10 14:25:20 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:07:08.283 request: 00:07:08.283 { 00:07:08.283 "method": "framework_enable_cpumask_locks", 00:07:08.283 "params": {} 00:07:08.283 } 00:07:08.283 Got JSON-RPC error response 00:07:08.283 GoRPCClient: error on JSON-RPC call 00:07:08.283 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:08.283 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:08.283 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:08.283 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:08.283 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:08.283 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 76986 /var/tmp/spdk.sock 00:07:08.283 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 76986 ']' 00:07:08.283 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.283 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.283 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.283 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.283 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.543 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:08.543 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:08.543 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 76998 /var/tmp/spdk2.sock 00:07:08.543 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 76998 ']' 00:07:08.543 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:08.543 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.543 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
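
The -32603 "Failed to claim CPU core: 2" response above is the expected outcome here: the primary target has already taken the core 2 lock via the same RPC. Reproducing the pair of calls with the standard rpc.py client (an assumption; the test itself goes through its rpc_cmd wrapper) would look roughly like:

    # Primary on the default socket claims cores 0-2 successfully:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
    # Secondary overlaps on core 2, so the same call against its socket fails
    # with the JSON-RPC error shown above:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
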
00:07:08.543 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.543 14:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.801 ************************************ 00:07:08.801 END TEST locking_overlapped_coremask_via_rpc 00:07:08.801 ************************************ 00:07:08.801 14:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.801 14:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:08.801 14:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:08.801 14:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:08.801 14:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:08.801 14:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:08.801 00:07:08.801 real 0m2.183s 00:07:08.801 user 0m1.322s 00:07:08.801 sys 0m0.201s 00:07:08.801 14:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.801 14:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.060 14:25:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:09.060 14:25:21 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:09.060 14:25:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 76986 ]] 00:07:09.060 14:25:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 76986 00:07:09.060 14:25:21 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 76986 ']' 00:07:09.060 14:25:21 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 76986 00:07:09.060 14:25:21 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:09.060 14:25:21 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:09.060 14:25:21 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76986 00:07:09.060 killing process with pid 76986 00:07:09.060 14:25:21 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:09.060 14:25:21 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:09.060 14:25:21 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76986' 00:07:09.060 14:25:21 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 76986 00:07:09.060 14:25:21 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 76986 00:07:09.318 14:25:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 76998 ]] 00:07:09.318 14:25:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 76998 00:07:09.318 14:25:21 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 76998 ']' 00:07:09.318 14:25:21 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 76998 00:07:09.318 14:25:21 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:09.318 14:25:21 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:09.318 14:25:21 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76998 00:07:09.318 killing process with pid 76998 00:07:09.318 14:25:21 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:09.318 14:25:21 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:09.318 14:25:21 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76998' 00:07:09.318 14:25:21 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 76998 00:07:09.318 14:25:21 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 76998 00:07:09.318 14:25:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:09.577 14:25:21 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:09.577 14:25:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 76986 ]] 00:07:09.577 14:25:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 76986 00:07:09.577 14:25:21 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 76986 ']' 00:07:09.577 14:25:21 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 76986 00:07:09.577 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (76986) - No such process 00:07:09.577 Process with pid 76986 is not found 00:07:09.577 14:25:21 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 76986 is not found' 00:07:09.577 14:25:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 76998 ]] 00:07:09.577 14:25:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 76998 00:07:09.577 14:25:21 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 76998 ']' 00:07:09.577 14:25:21 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 76998 00:07:09.577 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (76998) - No such process 00:07:09.577 Process with pid 76998 is not found 00:07:09.577 14:25:21 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 76998 is not found' 00:07:09.577 14:25:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:09.577 00:07:09.577 real 0m15.200s 00:07:09.577 user 0m29.573s 00:07:09.577 sys 0m4.211s 00:07:09.577 14:25:21 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.577 ************************************ 00:07:09.577 14:25:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.577 END TEST cpu_locks 00:07:09.577 ************************************ 00:07:09.577 14:25:21 event -- common/autotest_common.sh@1142 -- # return 0 00:07:09.577 ************************************ 00:07:09.577 END TEST event 00:07:09.577 ************************************ 00:07:09.577 00:07:09.577 real 0m40.309s 00:07:09.577 user 1m21.314s 00:07:09.577 sys 0m7.684s 00:07:09.577 14:25:21 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.577 14:25:21 event -- common/autotest_common.sh@10 -- # set +x 00:07:09.577 14:25:21 -- common/autotest_common.sh@1142 -- # return 0 00:07:09.578 14:25:21 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:09.578 14:25:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.578 14:25:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.578 14:25:21 -- common/autotest_common.sh@10 -- # set +x 00:07:09.578 ************************************ 00:07:09.578 START TEST thread 
00:07:09.578 ************************************ 00:07:09.578 14:25:21 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:09.578 * Looking for test storage... 00:07:09.578 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:09.578 14:25:21 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:09.578 14:25:21 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:09.578 14:25:21 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.578 14:25:21 thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.578 ************************************ 00:07:09.578 START TEST thread_poller_perf 00:07:09.578 ************************************ 00:07:09.578 14:25:21 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:09.578 [2024-07-10 14:25:21.799785] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:09.578 [2024-07-10 14:25:21.799875] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77145 ] 00:07:09.836 [2024-07-10 14:25:21.919053] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:09.836 [2024-07-10 14:25:21.938227] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.836 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:09.836 [2024-07-10 14:25:21.974936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.772 ====================================== 00:07:10.772 busy:2206163389 (cyc) 00:07:10.772 total_run_count: 278000 00:07:10.772 tsc_hz: 2200000000 (cyc) 00:07:10.772 ====================================== 00:07:10.772 poller_cost: 7935 (cyc), 3606 (nsec) 00:07:10.772 00:07:10.772 real 0m1.251s 00:07:10.772 user 0m1.105s 00:07:10.772 sys 0m0.037s 00:07:10.772 14:25:23 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.772 14:25:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:10.772 ************************************ 00:07:10.772 END TEST thread_poller_perf 00:07:10.772 ************************************ 00:07:11.030 14:25:23 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:11.030 14:25:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:11.030 14:25:23 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:11.030 14:25:23 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.030 14:25:23 thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.030 ************************************ 00:07:11.030 START TEST thread_poller_perf 00:07:11.030 ************************************ 00:07:11.030 14:25:23 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:11.030 [2024-07-10 14:25:23.103372] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 
00:07:11.030 [2024-07-10 14:25:23.104037] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77180 ] 00:07:11.030 [2024-07-10 14:25:23.224718] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:11.030 [2024-07-10 14:25:23.244806] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.030 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:11.030 [2024-07-10 14:25:23.280044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.451 ====================================== 00:07:12.451 busy:2202127931 (cyc) 00:07:12.451 total_run_count: 3935000 00:07:12.451 tsc_hz: 2200000000 (cyc) 00:07:12.451 ====================================== 00:07:12.451 poller_cost: 559 (cyc), 254 (nsec) 00:07:12.451 00:07:12.451 real 0m1.249s 00:07:12.451 user 0m1.099s 00:07:12.451 sys 0m0.042s 00:07:12.451 14:25:24 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.452 ************************************ 00:07:12.452 END TEST thread_poller_perf 00:07:12.452 ************************************ 00:07:12.452 14:25:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:12.452 14:25:24 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:12.452 14:25:24 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:12.452 ************************************ 00:07:12.452 END TEST thread 00:07:12.452 ************************************ 00:07:12.452 00:07:12.452 real 0m2.675s 00:07:12.452 user 0m2.266s 00:07:12.452 sys 0m0.189s 00:07:12.452 14:25:24 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.452 14:25:24 thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.452 14:25:24 -- common/autotest_common.sh@1142 -- # return 0 00:07:12.452 14:25:24 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:12.452 14:25:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:12.452 14:25:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.452 14:25:24 -- common/autotest_common.sh@10 -- # set +x 00:07:12.452 ************************************ 00:07:12.452 START TEST accel 00:07:12.452 ************************************ 00:07:12.452 14:25:24 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:12.452 * Looking for test storage... 00:07:12.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:12.452 14:25:24 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:12.452 14:25:24 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:12.452 14:25:24 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:12.452 14:25:24 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=77249 00:07:12.452 14:25:24 accel -- accel/accel.sh@63 -- # waitforlisten 77249 00:07:12.452 14:25:24 accel -- common/autotest_common.sh@829 -- # '[' -z 77249 ']' 00:07:12.452 14:25:24 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
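
The two poller_perf summaries above are internally consistent: poller_cost is busy cycles divided by total_run_count, and the nanosecond figure is that quotient rescaled by tsc_hz (2.2 GHz here). A quick shell sanity check, assuming the tool truncates rather than rounds:

    echo $(( 2206163389 / 278000 ))              # 7935 cyc  (1 us period run)
    echo $(( 7935 * 1000000000 / 2200000000 ))   # 3606 nsec
    echo $(( 2202127931 / 3935000 ))             # 559 cyc   (0 us period run)
    echo $(( 559 * 1000000000 / 2200000000 ))    # 254 nsec
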
00:07:12.452 14:25:24 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:12.452 14:25:24 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.452 14:25:24 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:12.452 14:25:24 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:12.452 14:25:24 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:12.452 14:25:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.452 14:25:24 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.452 14:25:24 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.452 14:25:24 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.452 14:25:24 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.452 14:25:24 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.452 14:25:24 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:12.452 14:25:24 accel -- accel/accel.sh@41 -- # jq -r . 00:07:12.452 [2024-07-10 14:25:24.574928] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:12.452 [2024-07-10 14:25:24.575040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77249 ] 00:07:12.452 [2024-07-10 14:25:24.697496] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:12.452 [2024-07-10 14:25:24.711516] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.711 [2024-07-10 14:25:24.748371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.711 14:25:24 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.711 14:25:24 accel -- common/autotest_common.sh@862 -- # return 0 00:07:12.711 14:25:24 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:12.711 14:25:24 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:12.711 14:25:24 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:12.711 14:25:24 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:12.711 14:25:24 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:12.711 14:25:24 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:12.711 14:25:24 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.711 14:25:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.711 14:25:24 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:12.711 14:25:24 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.711 14:25:24 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # IFS== 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:12.711 14:25:24 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:12.711 14:25:24 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # IFS== 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:12.711 14:25:24 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:12.711 14:25:24 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # IFS== 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:12.711 14:25:24 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:12.711 14:25:24 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # IFS== 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:12.711 14:25:24 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:12.711 14:25:24 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # IFS== 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:12.711 14:25:24 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:12.711 14:25:24 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # IFS== 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:12.711 14:25:24 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:12.711 14:25:24 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # IFS== 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:12.711 14:25:24 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:12.711 14:25:24 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # IFS== 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:12.711 14:25:24 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:12.711 14:25:24 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # IFS== 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:12.711 14:25:24 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:12.711 14:25:24 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # IFS== 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:12.711 14:25:24 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:12.711 14:25:24 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # IFS== 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:12.711 14:25:24 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:12.711 14:25:24 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # IFS== 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:12.711 14:25:24 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:12.711 14:25:24 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # IFS== 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:12.711 14:25:24 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:12.711 14:25:24 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # IFS== 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:12.711 14:25:24 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:12.711 14:25:24 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # IFS== 00:07:12.711 14:25:24 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:12.711 14:25:24 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:12.711 14:25:24 accel -- accel/accel.sh@75 -- # killprocess 77249 00:07:12.711 14:25:24 accel -- common/autotest_common.sh@948 -- # '[' -z 77249 ']' 00:07:12.711 14:25:24 accel -- common/autotest_common.sh@952 -- # kill -0 77249 00:07:12.711 14:25:24 accel -- common/autotest_common.sh@953 -- # uname 00:07:12.711 14:25:24 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:12.711 14:25:24 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77249 00:07:12.711 14:25:24 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:12.711 killing process with pid 77249 00:07:12.711 14:25:24 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:12.711 14:25:24 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77249' 00:07:12.711 14:25:24 accel -- common/autotest_common.sh@967 -- # kill 77249 00:07:12.711 14:25:24 accel -- common/autotest_common.sh@972 -- # wait 77249 00:07:12.971 14:25:25 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:12.971 14:25:25 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:12.971 14:25:25 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:12.971 14:25:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.971 14:25:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.971 14:25:25 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:12.971 14:25:25 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:12.971 14:25:25 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:12.971 14:25:25 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.971 14:25:25 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.971 14:25:25 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.971 14:25:25 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.971 14:25:25 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.971 14:25:25 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:12.971 14:25:25 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:12.971 14:25:25 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.971 14:25:25 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:13.231 14:25:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:13.231 14:25:25 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:13.231 14:25:25 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:13.231 14:25:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.231 14:25:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.231 ************************************ 00:07:13.231 START TEST accel_missing_filename 00:07:13.231 ************************************ 00:07:13.231 14:25:25 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:13.231 14:25:25 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:13.231 14:25:25 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:13.231 14:25:25 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:13.231 14:25:25 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:13.231 14:25:25 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:13.231 14:25:25 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:13.231 14:25:25 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:13.231 14:25:25 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:13.231 14:25:25 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:13.231 14:25:25 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.231 14:25:25 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.231 14:25:25 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.231 14:25:25 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.231 14:25:25 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.231 14:25:25 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:13.231 14:25:25 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:13.231 [2024-07-10 14:25:25.322810] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:13.231 [2024-07-10 14:25:25.322905] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77305 ] 00:07:13.231 [2024-07-10 14:25:25.439402] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
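
The opcode dump earlier in this accel suite pipes accel_get_opc_assignments through jq to build the expected_opcs table, and with no accel modules configured every entry comes back as "software". An equivalent stand-alone query (assuming the repo's scripts/rpc.py client against the default socket) would be:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
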
00:07:13.231 [2024-07-10 14:25:25.458697] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.231 [2024-07-10 14:25:25.503406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.490 [2024-07-10 14:25:25.537897] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:13.490 [2024-07-10 14:25:25.581114] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:13.490 A filename is required. 00:07:13.490 14:25:25 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:13.490 14:25:25 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:13.490 14:25:25 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:13.490 14:25:25 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:13.490 14:25:25 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:13.490 14:25:25 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:13.490 00:07:13.490 real 0m0.341s 00:07:13.490 user 0m0.209s 00:07:13.490 sys 0m0.078s 00:07:13.490 14:25:25 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.490 14:25:25 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:13.490 ************************************ 00:07:13.490 END TEST accel_missing_filename 00:07:13.490 ************************************ 00:07:13.490 14:25:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:13.490 14:25:25 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:13.490 14:25:25 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:13.490 14:25:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.490 14:25:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.490 ************************************ 00:07:13.490 START TEST accel_compress_verify 00:07:13.490 ************************************ 00:07:13.490 14:25:25 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:13.490 14:25:25 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:13.490 14:25:25 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:13.490 14:25:25 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:13.490 14:25:25 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:13.490 14:25:25 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:13.490 14:25:25 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:13.490 14:25:25 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:13.490 14:25:25 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:13.490 14:25:25 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:13.490 14:25:25 accel.accel_compress_verify -- 
accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.490 14:25:25 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.490 14:25:25 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.490 14:25:25 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.490 14:25:25 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.490 14:25:25 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:13.490 14:25:25 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:13.490 [2024-07-10 14:25:25.712915] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:13.490 [2024-07-10 14:25:25.713034] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77330 ] 00:07:13.748 [2024-07-10 14:25:25.832690] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:13.748 [2024-07-10 14:25:25.851119] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.748 [2024-07-10 14:25:25.904673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.748 [2024-07-10 14:25:25.948077] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:13.748 [2024-07-10 14:25:25.993395] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:14.008 00:07:14.008 Compression does not support the verify option, aborting. 00:07:14.008 14:25:26 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:14.008 14:25:26 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:14.008 14:25:26 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:14.008 14:25:26 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:14.008 14:25:26 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:14.008 14:25:26 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:14.008 00:07:14.008 real 0m0.367s 00:07:14.008 user 0m0.219s 00:07:14.008 sys 0m0.093s 00:07:14.008 ************************************ 00:07:14.008 END TEST accel_compress_verify 00:07:14.008 14:25:26 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.008 14:25:26 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:14.008 ************************************ 00:07:14.008 14:25:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:14.008 14:25:26 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:14.008 14:25:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:14.008 14:25:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.008 14:25:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.008 ************************************ 00:07:14.008 START TEST accel_wrong_workload 00:07:14.008 ************************************ 00:07:14.008 14:25:26 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:14.008 14:25:26 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:14.008 14:25:26 
accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:14.008 14:25:26 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:14.008 14:25:26 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.008 14:25:26 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:14.008 14:25:26 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.008 14:25:26 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:14.008 14:25:26 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:14.008 14:25:26 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:14.008 14:25:26 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.008 14:25:26 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.008 14:25:26 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.008 14:25:26 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.008 14:25:26 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.008 14:25:26 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:14.008 14:25:26 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:14.008 Unsupported workload type: foobar 00:07:14.008 [2024-07-10 14:25:26.121602] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:14.008 accel_perf options: 00:07:14.008 [-h help message] 00:07:14.008 [-q queue depth per core] 00:07:14.008 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:14.008 [-T number of threads per core 00:07:14.008 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:14.008 [-t time in seconds] 00:07:14.008 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:14.008 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:14.008 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:14.008 [-l for compress/decompress workloads, name of uncompressed input file 00:07:14.008 [-S for crc32c workload, use this seed value (default 0) 00:07:14.008 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:14.008 [-f for fill workload, use this BYTE value (default 255) 00:07:14.008 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:14.008 [-y verify result if this switch is on] 00:07:14.008 [-a tasks to allocate per core (default: same value as -q)] 00:07:14.008 Can be used to spread operations across a wider range of memory. 
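
The foobar run above only exercises accel_perf's argument parser; the options listing it prints is the reference for the flags the remaining tests use. For contrast, a valid combination of those same flags (essentially what the crc32c test just below runs, using the binary path printed in this log):

    # crc32c for 1 second with seed 32, verifying results (-y):
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y
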
00:07:14.008 ************************************ 00:07:14.008 END TEST accel_wrong_workload 00:07:14.008 ************************************ 00:07:14.008 14:25:26 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:14.008 14:25:26 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:14.008 14:25:26 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:14.008 14:25:26 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:14.008 00:07:14.008 real 0m0.025s 00:07:14.008 user 0m0.020s 00:07:14.008 sys 0m0.005s 00:07:14.008 14:25:26 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.008 14:25:26 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:14.008 14:25:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:14.008 14:25:26 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:14.008 14:25:26 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:14.008 14:25:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.008 14:25:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.008 ************************************ 00:07:14.008 START TEST accel_negative_buffers 00:07:14.008 ************************************ 00:07:14.008 14:25:26 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:14.008 14:25:26 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:14.008 14:25:26 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:14.008 14:25:26 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:14.008 14:25:26 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.008 14:25:26 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:14.008 14:25:26 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.008 14:25:26 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:14.008 14:25:26 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:14.008 14:25:26 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:14.008 14:25:26 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.008 14:25:26 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.008 14:25:26 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.009 14:25:26 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.009 14:25:26 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.009 14:25:26 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:14.009 14:25:26 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:14.009 -x option must be non-negative. 
00:07:14.009 [2024-07-10 14:25:26.197903] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:14.009 accel_perf options: 00:07:14.009 [-h help message] 00:07:14.009 [-q queue depth per core] 00:07:14.009 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:14.009 [-T number of threads per core 00:07:14.009 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:14.009 [-t time in seconds] 00:07:14.009 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:14.009 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:14.009 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:14.009 [-l for compress/decompress workloads, name of uncompressed input file 00:07:14.009 [-S for crc32c workload, use this seed value (default 0) 00:07:14.009 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:14.009 [-f for fill workload, use this BYTE value (default 255) 00:07:14.009 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:14.009 [-y verify result if this switch is on] 00:07:14.009 [-a tasks to allocate per core (default: same value as -q)] 00:07:14.009 Can be used to spread operations across a wider range of memory. 00:07:14.009 14:25:26 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:14.009 14:25:26 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:14.009 14:25:26 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:14.009 14:25:26 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:14.009 00:07:14.009 real 0m0.032s 00:07:14.009 user 0m0.018s 00:07:14.009 sys 0m0.013s 00:07:14.009 ************************************ 00:07:14.009 END TEST accel_negative_buffers 00:07:14.009 ************************************ 00:07:14.009 14:25:26 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.009 14:25:26 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:14.009 14:25:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:14.009 14:25:26 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:14.009 14:25:26 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:14.009 14:25:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.009 14:25:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.009 ************************************ 00:07:14.009 START TEST accel_crc32c 00:07:14.009 ************************************ 00:07:14.009 14:25:26 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:14.009 14:25:26 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:14.009 14:25:26 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:14.009 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.009 14:25:26 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:14.009 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.009 14:25:26 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:07:14.009 14:25:26 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:14.009 14:25:26 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.009 14:25:26 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.009 14:25:26 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.009 14:25:26 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.009 14:25:26 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.009 14:25:26 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:14.009 14:25:26 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:14.009 [2024-07-10 14:25:26.269139] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:14.009 [2024-07-10 14:25:26.269231] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77388 ] 00:07:14.268 [2024-07-10 14:25:26.386202] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:14.268 [2024-07-10 14:25:26.402893] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.268 [2024-07-10 14:25:26.439657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:14.268 14:25:26 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.268 14:25:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.647 14:25:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:15.647 14:25:27 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.647 14:25:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.647 14:25:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.647 14:25:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:15.647 14:25:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.647 14:25:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.647 14:25:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.647 14:25:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:15.647 14:25:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.647 14:25:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.647 14:25:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.647 14:25:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:15.647 14:25:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.647 14:25:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.647 14:25:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.647 14:25:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:15.647 14:25:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.647 14:25:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.647 14:25:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.647 14:25:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:15.647 ************************************ 00:07:15.647 END TEST accel_crc32c 00:07:15.647 ************************************ 00:07:15.647 14:25:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.647 14:25:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.647 14:25:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.647 14:25:27 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.647 14:25:27 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:15.647 14:25:27 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.647 00:07:15.647 real 0m1.322s 00:07:15.647 user 0m1.153s 00:07:15.647 sys 0m0.074s 00:07:15.647 14:25:27 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.647 14:25:27 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:15.647 14:25:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:15.647 14:25:27 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:15.647 14:25:27 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:15.647 14:25:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.647 14:25:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.647 ************************************ 00:07:15.647 START TEST accel_crc32c_C2 00:07:15.647 ************************************ 00:07:15.647 14:25:27 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:15.647 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:15.647 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:15.647 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.647 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.647 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
crc32c -y -C 2 00:07:15.647 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:15.647 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.647 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.647 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.647 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.647 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.647 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.647 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:15.647 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:15.647 [2024-07-10 14:25:27.638366] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:15.647 [2024-07-10 14:25:27.638465] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77422 ] 00:07:15.647 [2024-07-10 14:25:27.758394] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:15.647 [2024-07-10 14:25:27.777556] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.647 [2024-07-10 14:25:27.812875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.647 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.647 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.647 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.647 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.647 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.647 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.647 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 
00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.648 14:25:27 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.648 14:25:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.023 14:25:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.023 14:25:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.023 14:25:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.023 14:25:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.023 14:25:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.023 14:25:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.023 14:25:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.023 14:25:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.023 14:25:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.023 14:25:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.023 14:25:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.023 14:25:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.023 14:25:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.023 14:25:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.023 14:25:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.023 14:25:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.023 14:25:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.023 14:25:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.023 14:25:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.023 14:25:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.023 14:25:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.023 14:25:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.023 14:25:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.023 14:25:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.023 14:25:28 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.023 14:25:28 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:17.023 14:25:28 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.023 00:07:17.023 real 0m1.322s 00:07:17.023 user 0m1.164s 00:07:17.023 sys 0m0.065s 00:07:17.023 14:25:28 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.023 14:25:28 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:17.023 ************************************ 00:07:17.023 END TEST accel_crc32c_C2 00:07:17.023 ************************************ 00:07:17.023 14:25:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:17.023 14:25:28 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:17.023 14:25:28 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:17.023 14:25:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.023 14:25:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.023 ************************************ 00:07:17.023 START TEST accel_copy 00:07:17.023 ************************************ 00:07:17.023 14:25:28 accel.accel_copy -- common/autotest_common.sh@1123 -- 
# accel_test -t 1 -w copy -y 00:07:17.023 14:25:28 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:17.023 14:25:28 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:17.023 14:25:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.023 14:25:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.023 14:25:28 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:17.023 14:25:28 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:17.023 14:25:28 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:17.023 14:25:28 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.023 14:25:28 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.023 14:25:28 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.023 14:25:28 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.023 14:25:28 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.023 14:25:28 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:17.023 14:25:28 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:17.023 [2024-07-10 14:25:29.006840] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:17.023 [2024-07-10 14:25:29.006936] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77452 ] 00:07:17.023 [2024-07-10 14:25:29.124109] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
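The -c /dev/fd/62 argument seen in each invocation is the JSON accel configuration assembled by build_accel_config and, judging by both being traced at accel/accel.sh@12, handed over through process substitution. A minimal sketch of that mechanism (on this run the configuration stays empty, since every [[ 0 -gt 0 ]] / [[ -n '' ]] branch skips adding an entry):

    # accel_perf receives the generated JSON as a pseudo-file such as /dev/fd/62
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c <(build_accel_config) -t 1 -w copy -y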
00:07:17.023 [2024-07-10 14:25:29.145696] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.023 [2024-07-10 14:25:29.186860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:17.023 14:25:29 accel.accel_copy -- 
accel/accel.sh@21 -- # case "$var" in 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.023 14:25:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.400 14:25:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:18.400 14:25:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.400 14:25:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.400 14:25:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.400 14:25:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:18.400 14:25:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.400 14:25:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.400 14:25:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.400 14:25:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:18.400 14:25:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.400 14:25:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.400 14:25:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.400 14:25:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:18.400 14:25:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.400 14:25:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.400 14:25:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.400 14:25:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:18.400 14:25:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.400 14:25:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.400 14:25:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.400 14:25:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:18.400 14:25:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.400 14:25:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.400 14:25:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.400 14:25:30 
accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.400 14:25:30 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:18.400 14:25:30 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.400 00:07:18.400 real 0m1.330s 00:07:18.400 user 0m0.013s 00:07:18.400 sys 0m0.003s 00:07:18.400 14:25:30 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.400 ************************************ 00:07:18.400 END TEST accel_copy 00:07:18.400 14:25:30 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:18.400 ************************************ 00:07:18.400 14:25:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:18.400 14:25:30 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:18.400 14:25:30 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:18.400 14:25:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.400 14:25:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.400 ************************************ 00:07:18.400 START TEST accel_fill 00:07:18.400 ************************************ 00:07:18.400 14:25:30 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:18.400 [2024-07-10 14:25:30.387905] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:18.400 [2024-07-10 14:25:30.388012] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77485 ] 00:07:18.400 [2024-07-10 14:25:30.508101] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
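The fill case exercises the remaining tuning flags from the usage text: -f sets the fill byte, -q the queue depth per core, -a the number of tasks allocated per core, and -y enables result verification. An equivalent standalone invocation, with the generated config replaced by a hypothetical accel.json path, would look like:

    # accel.json is a placeholder for the configuration normally supplied via /dev/fd/62
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c accel.json -t 1 -w fill -f 128 -q 64 -a 64 -y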
00:07:18.400 [2024-07-10 14:25:30.528145] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.400 [2024-07-10 14:25:30.564804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.400 14:25:30 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:18.401 14:25:30 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:07:18.401 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.401 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.401 14:25:30 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:18.401 14:25:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.401 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.401 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.401 14:25:30 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:18.401 14:25:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.401 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.401 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.401 14:25:30 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.401 14:25:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.401 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.401 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.401 14:25:30 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:18.401 14:25:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.401 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.401 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.401 14:25:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:18.401 14:25:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.401 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.401 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.401 14:25:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:18.401 14:25:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.401 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.401 14:25:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.776 14:25:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:19.776 14:25:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.776 14:25:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.776 14:25:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.776 14:25:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:19.776 14:25:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.776 14:25:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.776 14:25:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.776 14:25:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:19.776 14:25:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.776 14:25:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.776 14:25:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.776 14:25:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:19.776 14:25:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.776 14:25:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.776 14:25:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.776 14:25:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:19.776 14:25:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.776 14:25:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.776 14:25:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.776 14:25:31 
accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:19.776 14:25:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.776 14:25:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.776 14:25:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.776 14:25:31 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.776 14:25:31 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:19.776 14:25:31 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.776 00:07:19.776 real 0m1.336s 00:07:19.776 user 0m1.171s 00:07:19.776 sys 0m0.072s 00:07:19.776 14:25:31 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.776 14:25:31 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:19.776 ************************************ 00:07:19.776 END TEST accel_fill 00:07:19.776 ************************************ 00:07:19.776 14:25:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:19.776 14:25:31 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:19.776 14:25:31 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:19.776 14:25:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.776 14:25:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.776 ************************************ 00:07:19.776 START TEST accel_copy_crc32c 00:07:19.776 ************************************ 00:07:19.776 14:25:31 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:19.776 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:19.776 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:19.776 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.776 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.776 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:19.776 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:19.776 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:19.776 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.776 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.776 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.776 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.776 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.776 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:19.776 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:19.776 [2024-07-10 14:25:31.769058] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:19.776 [2024-07-10 14:25:31.769179] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77521 ] 00:07:19.776 [2024-07-10 14:25:31.891053] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:19.776 [2024-07-10 14:25:31.909556] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.776 [2024-07-10 14:25:31.949507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.776 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.776 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.776 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.776 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.776 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.776 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.776 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.776 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.777 14:25:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.152 14:25:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.152 14:25:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.152 14:25:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.152 14:25:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.152 14:25:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.152 14:25:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.152 14:25:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.152 14:25:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.152 
14:25:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.152 14:25:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.152 14:25:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.152 14:25:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.152 14:25:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.152 14:25:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.152 14:25:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.152 14:25:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.152 14:25:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.152 14:25:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.152 14:25:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.152 14:25:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.152 14:25:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.152 14:25:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.152 14:25:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.152 14:25:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.152 14:25:33 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.152 14:25:33 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:21.152 14:25:33 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.152 00:07:21.152 real 0m1.340s 00:07:21.152 user 0m1.164s 00:07:21.152 sys 0m0.081s 00:07:21.152 14:25:33 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.152 14:25:33 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:21.152 ************************************ 00:07:21.152 END TEST accel_copy_crc32c 00:07:21.152 ************************************ 00:07:21.152 14:25:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:21.152 14:25:33 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:21.152 14:25:33 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:21.152 14:25:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.152 14:25:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.152 ************************************ 00:07:21.152 START TEST accel_copy_crc32c_C2 00:07:21.152 ************************************ 00:07:21.152 14:25:33 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:21.152 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:21.152 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:21.152 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.152 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.152 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:21.152 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:21.152 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # 
accel_json_cfg=() 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:21.153 [2024-07-10 14:25:33.146763] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:21.153 [2024-07-10 14:25:33.146838] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77550 ] 00:07:21.153 [2024-07-10 14:25:33.263478] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:21.153 [2024-07-10 14:25:33.281676] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.153 [2024-07-10 14:25:33.317195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.153 
14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var 
val 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.153 14:25:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 14:25:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.526 14:25:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.526 14:25:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.526 14:25:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 14:25:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.526 14:25:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.526 14:25:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.526 14:25:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 14:25:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.526 14:25:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.526 14:25:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.526 14:25:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 14:25:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.526 14:25:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.526 14:25:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.526 14:25:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 14:25:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.526 14:25:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.526 14:25:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.526 14:25:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 14:25:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.526 14:25:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.526 14:25:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.526 14:25:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 14:25:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.526 14:25:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:22.526 14:25:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.526 00:07:22.526 real 0m1.320s 00:07:22.526 user 0m1.154s 00:07:22.526 sys 0m0.071s 00:07:22.526 14:25:34 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.526 14:25:34 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:22.526 ************************************ 00:07:22.526 END TEST accel_copy_crc32c_C2 00:07:22.526 ************************************ 00:07:22.526 14:25:34 accel -- common/autotest_common.sh@1142 -- # return 0 
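
For reference, the two copy_crc32c runs that just finished above drive accel_perf with "-w copy_crc32c -y" and, in the second case, "-C 2"; the 4096- and 8192-byte values in that trace suggest the chained variant spreads one CRC across two 4096-byte source buffers. Below is a minimal, software-only Python sketch of the semantics being exercised. It is illustrative only, not SPDK's accel module, and the buffer contents are arbitrary.

    def crc32c(data: bytes, crc: int = 0) -> int:
        # Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78.
        crc ^= 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    def copy_crc32c(dst: bytearray, src: bytes, seed: int = 0) -> int:
        # Copy src into dst and return the CRC-32C of the copied bytes.
        # Passing the previous result back in as `seed` chains the CRC across
        # buffers, which is what the "-C 2" run above appears to measure.
        dst[: len(src)] = src
        return crc32c(src, seed)

    src1 = bytes(range(256)) * 16                # 4096 bytes, as in the trace
    src2 = src1[::-1]
    dst1, dst2 = bytearray(4096), bytearray(4096)
    crc = copy_crc32c(dst1, src1)                # first 4096-byte buffer
    crc = copy_crc32c(dst2, src2, crc)           # chained second buffer (8192 bytes total)
    print(hex(crc))
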
00:07:22.526 14:25:34 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:22.526 14:25:34 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:22.526 14:25:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.526 14:25:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.526 ************************************ 00:07:22.526 START TEST accel_dualcast 00:07:22.526 ************************************ 00:07:22.526 14:25:34 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:22.526 [2024-07-10 14:25:34.513069] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:22.526 [2024-07-10 14:25:34.513160] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77579 ] 00:07:22.526 [2024-07-10 14:25:34.637862] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:22.526 [2024-07-10 14:25:34.656820] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.526 [2024-07-10 14:25:34.692310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.526 14:25:34 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.526 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.527 14:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:23.900 14:25:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:23.900 14:25:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:23.900 14:25:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:23.900 14:25:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:23.900 14:25:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:23.900 14:25:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:23.900 14:25:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:23.900 14:25:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:23.900 14:25:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:23.900 14:25:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:23.900 14:25:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:23.900 14:25:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:23.900 14:25:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:23.900 14:25:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:23.900 14:25:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:23.900 14:25:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:23.900 14:25:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:23.900 14:25:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:23.900 14:25:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:23.900 14:25:35 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:07:23.900 14:25:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:23.900 14:25:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:23.900 14:25:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:23.900 14:25:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:23.900 14:25:35 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.900 14:25:35 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:23.900 14:25:35 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.900 00:07:23.900 real 0m1.329s 00:07:23.900 user 0m0.012s 00:07:23.900 sys 0m0.006s 00:07:23.900 14:25:35 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.900 14:25:35 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:23.900 ************************************ 00:07:23.900 END TEST accel_dualcast 00:07:23.900 ************************************ 00:07:23.900 14:25:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:23.900 14:25:35 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:23.900 14:25:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:23.900 14:25:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.900 14:25:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.900 ************************************ 00:07:23.900 START TEST accel_compare 00:07:23.900 ************************************ 00:07:23.900 14:25:35 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:23.900 14:25:35 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:23.900 14:25:35 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:23.900 14:25:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.900 14:25:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.900 14:25:35 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:23.900 14:25:35 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:23.900 14:25:35 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:23.900 14:25:35 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.900 14:25:35 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.900 14:25:35 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.900 14:25:35 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.900 14:25:35 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.900 14:25:35 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:23.900 14:25:35 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:23.900 [2024-07-10 14:25:35.894887] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:23.900 [2024-07-10 14:25:35.894985] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77619 ] 00:07:23.901 [2024-07-10 14:25:36.017698] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:23.901 [2024-07-10 14:25:36.035536] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.901 [2024-07-10 14:25:36.076048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.901 14:25:36 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.901 14:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.276 14:25:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:25.276 14:25:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.276 14:25:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.276 14:25:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.276 14:25:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:25.276 14:25:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.276 14:25:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.276 14:25:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.276 14:25:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:25.276 14:25:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.276 14:25:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.276 14:25:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.276 14:25:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:25.276 14:25:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.276 14:25:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.276 14:25:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.276 14:25:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:25.276 14:25:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.276 14:25:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.276 14:25:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.276 14:25:37 accel.accel_compare -- 
accel/accel.sh@20 -- # val= 00:07:25.276 14:25:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.276 14:25:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.276 14:25:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.276 14:25:37 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.276 14:25:37 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:25.276 14:25:37 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.276 00:07:25.276 real 0m1.336s 00:07:25.276 user 0m1.162s 00:07:25.276 sys 0m0.082s 00:07:25.276 14:25:37 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.276 14:25:37 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:25.276 ************************************ 00:07:25.276 END TEST accel_compare 00:07:25.276 ************************************ 00:07:25.276 14:25:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:25.276 14:25:37 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:25.276 14:25:37 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:25.276 14:25:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.276 14:25:37 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.276 ************************************ 00:07:25.276 START TEST accel_xor 00:07:25.276 ************************************ 00:07:25.276 14:25:37 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:25.276 [2024-07-10 14:25:37.275832] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:25.276 [2024-07-10 14:25:37.275928] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77648 ] 00:07:25.276 [2024-07-10 14:25:37.395994] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
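
As an aside, the dualcast and compare tests that completed above are plain memory operations: dualcast writes one source buffer to two destinations, and compare checks two equal-sized buffers for byte equality. A minimal Python illustration of those semantics follows; the 4096-byte size matches the traces, and none of this is SPDK code.

    def dualcast(dst1: bytearray, dst2: bytearray, src: bytes) -> None:
        # "-w dualcast": write one source buffer to two destinations.
        dst1[: len(src)] = src
        dst2[: len(src)] = src

    def compare(a: bytes, b: bytes) -> bool:
        # "-w compare": byte-wise equality check of two equal-sized buffers.
        return len(a) == len(b) and a == b

    src = b"\xa5" * 4096                          # 4096 bytes, as in the traces above
    d1, d2 = bytearray(4096), bytearray(4096)
    dualcast(d1, d2, src)
    assert compare(bytes(d1), bytes(d2))
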
00:07:25.276 [2024-07-10 14:25:37.409905] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.276 [2024-07-10 14:25:37.447979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.276 14:25:37 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.276 14:25:37 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:25.277 14:25:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.277 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.277 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.277 14:25:37 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.277 14:25:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.277 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.277 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.277 14:25:37 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:25.277 14:25:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.277 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.277 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.277 14:25:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.277 14:25:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.277 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.277 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.277 14:25:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.277 14:25:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.277 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.277 14:25:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.653 14:25:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.653 14:25:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.653 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.653 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.653 14:25:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.653 14:25:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.653 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.653 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.653 14:25:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.653 14:25:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.653 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.653 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.653 14:25:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.653 14:25:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.653 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.653 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.653 14:25:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.653 14:25:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.653 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.653 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.653 14:25:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.653 14:25:38 accel.accel_xor -- accel/accel.sh@21 -- 
# case "$var" in 00:07:26.653 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.653 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.653 14:25:38 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.653 14:25:38 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:26.653 14:25:38 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.653 00:07:26.653 real 0m1.327s 00:07:26.653 user 0m0.012s 00:07:26.653 sys 0m0.003s 00:07:26.653 14:25:38 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.653 14:25:38 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:26.653 ************************************ 00:07:26.653 END TEST accel_xor 00:07:26.653 ************************************ 00:07:26.653 14:25:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:26.653 14:25:38 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:26.653 14:25:38 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:26.654 14:25:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.654 14:25:38 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.654 ************************************ 00:07:26.654 START TEST accel_xor 00:07:26.654 ************************************ 00:07:26.654 14:25:38 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:26.654 [2024-07-10 14:25:38.640950] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:26.654 [2024-07-10 14:25:38.641036] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77677 ] 00:07:26.654 [2024-07-10 14:25:38.757084] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:26.654 [2024-07-10 14:25:38.772218] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.654 [2024-07-10 14:25:38.809230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.654 14:25:38 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.654 14:25:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.031 14:25:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.031 14:25:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.031 14:25:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.031 14:25:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.031 14:25:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.031 14:25:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.031 14:25:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.031 14:25:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.031 14:25:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.031 14:25:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.031 14:25:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.031 14:25:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.031 14:25:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.031 14:25:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.031 14:25:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.031 14:25:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.031 14:25:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.031 14:25:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.031 14:25:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.031 14:25:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.031 14:25:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.031 14:25:39 accel.accel_xor -- accel/accel.sh@21 -- 
# case "$var" in 00:07:28.031 14:25:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.031 14:25:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.031 14:25:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.031 14:25:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:28.031 14:25:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.031 00:07:28.031 real 0m1.317s 00:07:28.031 user 0m0.011s 00:07:28.031 sys 0m0.004s 00:07:28.031 14:25:39 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.031 ************************************ 00:07:28.031 END TEST accel_xor 00:07:28.031 ************************************ 00:07:28.031 14:25:39 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:28.031 14:25:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:28.031 14:25:39 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:28.031 14:25:39 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:28.031 14:25:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.031 14:25:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.031 ************************************ 00:07:28.031 START TEST accel_dif_verify 00:07:28.031 ************************************ 00:07:28.031 14:25:39 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:28.031 14:25:39 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:28.031 14:25:39 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:28.031 14:25:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.031 14:25:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.031 14:25:39 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:28.031 14:25:39 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:28.031 14:25:39 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:28.031 14:25:39 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.031 14:25:39 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.031 14:25:39 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.031 14:25:39 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.031 14:25:39 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.031 14:25:39 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:28.031 14:25:39 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:28.031 [2024-07-10 14:25:40.009856] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:28.031 [2024-07-10 14:25:40.009960] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77717 ] 00:07:28.031 [2024-07-10 14:25:40.126779] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:28.031 [2024-07-10 14:25:40.144852] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.031 [2024-07-10 14:25:40.191186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.031 14:25:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:28.031 14:25:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.031 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.031 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.031 14:25:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:28.031 14:25:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.031 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.031 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.031 14:25:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:28.031 14:25:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.031 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.031 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.032 14:25:40 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.032 14:25:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.410 14:25:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:29.410 14:25:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.410 14:25:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.410 14:25:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.410 14:25:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:29.410 14:25:41 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.410 14:25:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.410 14:25:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.410 14:25:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:29.410 14:25:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.410 14:25:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.410 14:25:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.410 14:25:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:29.410 14:25:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.410 14:25:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.410 14:25:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.410 14:25:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:29.410 14:25:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.410 14:25:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.410 14:25:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.410 14:25:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:29.410 14:25:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.410 14:25:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.410 14:25:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.410 14:25:41 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.410 14:25:41 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:29.410 14:25:41 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.410 00:07:29.410 real 0m1.336s 00:07:29.410 user 0m1.168s 00:07:29.410 sys 0m0.080s 00:07:29.410 14:25:41 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.410 14:25:41 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:29.410 ************************************ 00:07:29.410 END TEST accel_dif_verify 00:07:29.410 ************************************ 00:07:29.410 14:25:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:29.410 14:25:41 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:29.410 14:25:41 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:29.410 14:25:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.410 14:25:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.410 ************************************ 00:07:29.410 START TEST accel_dif_generate 00:07:29.410 ************************************ 00:07:29.410 14:25:41 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:29.410 14:25:41 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:29.410 14:25:41 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:29.410 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.410 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.410 14:25:41 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:29.410 14:25:41 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 
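The dif_verify pass above finishes in roughly 1.34 s of wall time on the software module, and the harness immediately starts the next workload by launching the accel_perf example with a JSON config streamed over /dev/fd/62. A minimal way to reproduce that run by hand is sketched below; dropping -c so accel_perf falls back to its built-in software path is an assumption about the tool's defaults, not something accel.sh itself does.

  # Hypothetical standalone reproduction of the dif_generate run above.
  # SPDK_DIR and the omission of -c are assumptions; the harness instead
  # pipes a generated JSON config into the tool on /dev/fd/62.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_generate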
00:07:29.410 14:25:41 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:29.410 14:25:41 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.410 14:25:41 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.410 14:25:41 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.410 14:25:41 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.410 14:25:41 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.410 14:25:41 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:29.410 14:25:41 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:29.410 [2024-07-10 14:25:41.389513] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:29.410 [2024-07-10 14:25:41.389600] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77746 ] 00:07:29.410 [2024-07-10 14:25:41.508897] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:29.410 [2024-07-10 14:25:41.529263] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.410 [2024-07-10 14:25:41.568981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.410 14:25:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:29.410 14:25:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.410 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.410 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # 
IFS=: 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.411 14:25:41 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.411 14:25:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.786 14:25:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:30.786 14:25:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.786 14:25:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.786 14:25:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.786 14:25:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:30.786 14:25:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.786 14:25:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.786 14:25:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.786 14:25:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:30.786 14:25:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.786 14:25:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.786 14:25:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.786 14:25:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:30.786 14:25:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.786 14:25:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.786 14:25:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.786 14:25:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:30.786 14:25:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.786 14:25:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.786 14:25:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.786 14:25:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:30.786 14:25:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.786 14:25:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.786 14:25:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.786 14:25:42 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.786 14:25:42 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:30.786 14:25:42 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.786 00:07:30.786 real 0m1.332s 00:07:30.786 user 0m1.167s 00:07:30.786 sys 0m0.075s 00:07:30.786 14:25:42 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:07:30.786 ************************************ 00:07:30.786 END TEST accel_dif_generate 00:07:30.786 ************************************ 00:07:30.786 14:25:42 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:30.786 14:25:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:30.786 14:25:42 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:30.786 14:25:42 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:30.786 14:25:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.786 14:25:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.786 ************************************ 00:07:30.786 START TEST accel_dif_generate_copy 00:07:30.786 ************************************ 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:30.786 [2024-07-10 14:25:42.775870] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:30.786 [2024-07-10 14:25:42.776017] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77775 ] 00:07:30.786 [2024-07-10 14:25:42.908060] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
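Each of these runs produces the long case/IFS/read trace that fills this section: accel.sh walks accel_perf's colon-separated output one "var: val" pair at a time, records which module and opcode were reported (accel_module=software, accel_opc=dif_generate_copy, and so on), and the accel.sh@27 checks at the end assert that both were seen and that the software module handled the operation. A rough, illustrative version of that parsing loop is shown below; the key strings matched in the case statement and the perf_output.txt file name are placeholders, only the IFS=: / read -r / case structure comes from the trace.

  # Illustrative sketch of the "var: val" parsing the trace corresponds to;
  # matched key names are assumptions, not the script's actual patterns.
  while IFS=: read -r var val; do
    case "$var" in
      *module*)   accel_module=${val//[[:space:]]/} ;;
      *workload*) accel_opc=${val//[[:space:]]/} ;;
    esac
  done < perf_output.txt
  [[ -n $accel_module && -n $accel_opc ]]   # mirrors the accel.sh@27 checks
  [[ $accel_module == software ]]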
00:07:30.786 [2024-07-10 14:25:42.924326] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.786 [2024-07-10 14:25:42.959835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
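The three DIF workloads in this stretch of the log (dif_verify, dif_generate, dif_generate_copy) are all driven the same way: one core, the software module, 4096-byte buffers, and a 1-second run. To exercise the same trio outside the test scripts, a loop like the following would do it; the loop itself and the reliance on accel_perf's defaults are assumptions, only the workload names and -t 1 come from the log.

  # Hypothetical convenience loop; not part of accel.sh.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  for w in dif_verify dif_generate dif_generate_copy; do
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w "$w"
  done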
00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.786 14:25:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:30.786 14:25:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.786 14:25:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.786 14:25:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.786 14:25:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:30.786 14:25:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.786 14:25:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.786 14:25:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.786 14:25:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:30.786 14:25:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.786 14:25:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.787 14:25:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.787 14:25:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:30.787 14:25:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.787 14:25:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.787 14:25:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.787 14:25:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:30.787 14:25:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.787 14:25:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.787 14:25:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.787 14:25:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:30.787 14:25:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.787 14:25:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.787 14:25:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.787 14:25:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:30.787 14:25:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.787 14:25:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.787 14:25:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.159 14:25:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:32.159 14:25:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.159 14:25:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.159 14:25:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.159 14:25:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:32.159 14:25:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.159 14:25:44 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:32.159 14:25:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.159 14:25:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:32.159 14:25:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.159 14:25:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.159 14:25:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.159 14:25:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:32.159 ************************************ 00:07:32.159 END TEST accel_dif_generate_copy 00:07:32.159 ************************************ 00:07:32.159 14:25:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.159 14:25:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.159 14:25:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.159 14:25:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:32.159 14:25:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.159 14:25:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.159 14:25:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.159 14:25:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:32.159 14:25:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.159 14:25:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.159 14:25:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.159 14:25:44 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.159 14:25:44 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:32.159 14:25:44 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.159 00:07:32.159 real 0m1.342s 00:07:32.159 user 0m1.168s 00:07:32.159 sys 0m0.080s 00:07:32.159 14:25:44 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.159 14:25:44 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:32.159 14:25:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:32.159 14:25:44 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:32.159 14:25:44 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.159 14:25:44 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:32.159 14:25:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.159 14:25:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.159 ************************************ 00:07:32.159 START TEST accel_comp 00:07:32.159 ************************************ 00:07:32.159 14:25:44 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.159 14:25:44 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:32.159 14:25:44 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:32.159 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.159 14:25:44 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.159 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # 
read -r var val 00:07:32.159 14:25:44 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.159 14:25:44 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:32.159 14:25:44 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.159 14:25:44 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.159 14:25:44 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.159 14:25:44 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.159 14:25:44 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.159 14:25:44 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:32.159 14:25:44 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:32.159 [2024-07-10 14:25:44.155914] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:32.159 [2024-07-10 14:25:44.155998] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77815 ] 00:07:32.159 [2024-07-10 14:25:44.272861] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:32.159 [2024-07-10 14:25:44.289890] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.159 [2024-07-10 14:25:44.338975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.159 14:25:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:32.159 14:25:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.159 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.159 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.159 14:25:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:32.159 14:25:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@20 
-- # val=compress 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.160 14:25:44 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.160 14:25:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:33.533 14:25:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:33.533 14:25:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.533 14:25:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:33.533 14:25:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:33.533 14:25:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:33.533 14:25:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.533 14:25:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:33.533 14:25:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:33.533 14:25:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:33.533 14:25:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.533 14:25:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:33.533 14:25:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:33.533 14:25:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:33.533 14:25:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.533 14:25:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:33.533 14:25:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:33.533 14:25:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:33.533 14:25:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.533 14:25:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:33.533 14:25:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:33.533 14:25:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:33.533 14:25:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.533 14:25:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:33.533 14:25:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:33.533 14:25:45 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.533 14:25:45 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:33.533 14:25:45 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.533 00:07:33.533 real 0m1.367s 00:07:33.533 user 0m1.185s 00:07:33.533 sys 0m0.085s 00:07:33.533 14:25:45 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.533 14:25:45 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:33.533 ************************************ 00:07:33.533 END TEST accel_comp 00:07:33.533 ************************************ 00:07:33.533 14:25:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:33.533 14:25:45 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:33.533 14:25:45 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:33.533 14:25:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.533 14:25:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.533 ************************************ 00:07:33.533 START TEST accel_decomp 00:07:33.533 ************************************ 00:07:33.533 14:25:45 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:33.533 14:25:45 
accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:33.533 [2024-07-10 14:25:45.560134] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:33.533 [2024-07-10 14:25:45.560224] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77844 ] 00:07:33.533 [2024-07-10 14:25:45.680113] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
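accel_comp and accel_decomp reuse the same accel_perf binary but point it at the test/accel/bib input file with -l, and the decompress case adds -y so the output is verified. The sketch below lifts those invocations from the trace, with the fd-62 JSON config dropped; relying on accel_perf's defaults in its place is my assumption, not what accel.sh encodes.

  # Compress/decompress invocations as seen in the trace; only SPDK_DIR is
  # added here for readability, and -c /dev/fd/62 is intentionally omitted.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w compress   -l "$SPDK_DIR/test/accel/bib"
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y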
00:07:33.533 [2024-07-10 14:25:45.698571] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.533 [2024-07-10 14:25:45.739894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.533 14:25:45 
accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.533 14:25:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:34.906 14:25:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:34.906 14:25:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.906 14:25:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:34.906 14:25:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:34.906 14:25:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:34.906 14:25:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.906 14:25:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:34.906 14:25:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:34.906 14:25:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:34.906 14:25:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.906 14:25:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:34.906 14:25:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:34.906 14:25:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:34.906 14:25:46 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.906 14:25:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:34.906 14:25:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:34.906 14:25:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:34.906 14:25:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.906 14:25:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:34.906 14:25:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:34.906 14:25:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:34.906 14:25:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.906 14:25:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:34.906 14:25:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:34.906 14:25:46 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:34.906 14:25:46 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:34.906 ************************************ 00:07:34.906 END TEST accel_decomp 00:07:34.906 ************************************ 00:07:34.906 14:25:46 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.906 00:07:34.906 real 0m1.341s 00:07:34.906 user 0m1.166s 00:07:34.906 sys 0m0.080s 00:07:34.906 14:25:46 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.906 14:25:46 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:34.906 14:25:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:34.906 14:25:46 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:34.906 14:25:46 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:34.906 14:25:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.906 14:25:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.906 ************************************ 00:07:34.906 START TEST accel_decomp_full 00:07:34.906 ************************************ 00:07:34.906 14:25:46 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:34.906 14:25:46 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:34.906 14:25:46 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:34.906 14:25:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.906 14:25:46 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:34.906 14:25:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.906 14:25:46 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:34.906 14:25:46 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:34.906 14:25:46 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.906 14:25:46 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.906 14:25:46 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.906 14:25:46 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.906 14:25:46 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 
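accel_decomp_full repeats the decompress workload with -o 0 appended, and the '111250 bytes' value in its trace shows the test input being handled as a single buffer rather than the 4096-byte chunks used above. The command below is the one from the trace with the fd-62 config dropped; reading -o 0 as "use the full input size per operation" is an inference from that buffer size in the log, not a documented claim.

  # From the trace (JSON config over /dev/fd/62 omitted); -o 0 passed verbatim.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -o 0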
00:07:34.906 14:25:46 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:34.906 14:25:46 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:34.906 [2024-07-10 14:25:46.946396] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:34.906 [2024-07-10 14:25:46.946497] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77873 ] 00:07:34.906 [2024-07-10 14:25:47.066584] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:34.906 [2024-07-10 14:25:47.081271] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.906 [2024-07-10 14:25:47.117619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:34.906 14:25:47 accel.accel_decomp_full -- 
accel/accel.sh@21 -- # case "$var" in 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.906 14:25:47 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.907 14:25:47 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:07:34.907 14:25:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:36.281 14:25:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:36.281 14:25:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:36.281 14:25:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:36.281 14:25:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:36.281 14:25:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:36.281 14:25:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:36.281 14:25:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:36.282 14:25:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:36.282 14:25:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:36.282 14:25:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:36.282 14:25:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:36.282 14:25:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:36.282 14:25:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:36.282 14:25:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:36.282 14:25:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:36.282 14:25:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:36.282 14:25:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:36.282 14:25:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:36.282 14:25:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:36.282 14:25:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:36.282 14:25:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:36.282 14:25:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:36.282 14:25:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:36.282 14:25:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:36.282 14:25:48 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:36.282 14:25:48 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:36.282 ************************************ 00:07:36.282 END TEST accel_decomp_full 00:07:36.282 ************************************ 00:07:36.282 14:25:48 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.282 00:07:36.282 real 0m1.340s 00:07:36.282 user 0m1.171s 00:07:36.282 sys 0m0.071s 00:07:36.282 14:25:48 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.282 14:25:48 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:36.282 14:25:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:36.282 14:25:48 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:36.282 14:25:48 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:36.282 14:25:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.282 14:25:48 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.282 ************************************ 00:07:36.282 START TEST accel_decomp_mcore 00:07:36.282 ************************************ 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l 
/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:36.282 [2024-07-10 14:25:48.332879] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:36.282 [2024-07-10 14:25:48.332962] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77915 ] 00:07:36.282 [2024-07-10 14:25:48.451338] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:36.282 [2024-07-10 14:25:48.471750] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:36.282 [2024-07-10 14:25:48.516664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.282 [2024-07-10 14:25:48.516755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.282 [2024-07-10 14:25:48.516839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.282 [2024-07-10 14:25:48.516838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # IFS=: 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:36.282 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.283 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.283 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.283 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.283 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.283 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.283 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.283 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.283 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.283 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.283 14:25:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.657 14:25:49 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.657 00:07:37.657 real 0m1.375s 00:07:37.657 user 0m4.423s 00:07:37.657 sys 0m0.096s 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.657 ************************************ 00:07:37.657 END TEST accel_decomp_mcore 00:07:37.657 ************************************ 00:07:37.657 14:25:49 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:37.657 14:25:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:37.657 14:25:49 
accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.657 14:25:49 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:37.657 14:25:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.657 14:25:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.657 ************************************ 00:07:37.657 START TEST accel_decomp_full_mcore 00:07:37.657 ************************************ 00:07:37.657 14:25:49 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.657 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:37.657 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:37.657 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.657 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.657 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.657 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.657 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:37.657 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.657 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.657 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.657 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.657 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.657 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:37.657 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:37.657 [2024-07-10 14:25:49.739693] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:37.657 [2024-07-10 14:25:49.739783] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77947 ] 00:07:37.657 [2024-07-10 14:25:49.857635] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:37.657 [2024-07-10 14:25:49.872773] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:37.658 [2024-07-10 14:25:49.921990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.658 [2024-07-10 14:25:49.922070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.658 [2024-07-10 14:25:49.922151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.658 [2024-07-10 14:25:49.922141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # 
IFS=: 00:07:37.916 14:25:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.849 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.850 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:38.850 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.850 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.850 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.850 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:38.850 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:38.850 14:25:51 accel.accel_decomp_full_mcore -- accel/accel.sh@27 
-- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.850 00:07:38.850 real 0m1.403s 00:07:38.850 user 0m4.504s 00:07:38.850 sys 0m0.113s 00:07:38.850 14:25:51 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.850 ************************************ 00:07:38.850 END TEST accel_decomp_full_mcore 00:07:38.850 ************************************ 00:07:38.850 14:25:51 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:39.107 14:25:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:39.107 14:25:51 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:39.107 14:25:51 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:39.107 14:25:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.107 14:25:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.107 ************************************ 00:07:39.107 START TEST accel_decomp_mthread 00:07:39.107 ************************************ 00:07:39.107 14:25:51 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:39.107 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:39.107 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:39.107 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.108 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:39.108 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.108 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:39.108 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:39.108 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.108 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.108 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.108 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.108 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.108 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:39.108 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:39.108 [2024-07-10 14:25:51.188550] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:39.108 [2024-07-10 14:25:51.188686] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77985 ] 00:07:39.108 [2024-07-10 14:25:51.313852] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:39.108 [2024-07-10 14:25:51.332620] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.108 [2024-07-10 14:25:51.371654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.365 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:39.366 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.366 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.366 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.366 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:39.366 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.366 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.366 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.366 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:39.366 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.366 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.366 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.366 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.366 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.366 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.366 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.366 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:39.366 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.366 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.366 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.366 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.366 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.366 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.366 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.366 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.366 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.366 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.366 14:25:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:40.299 14:25:52 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.299 00:07:40.299 real 0m1.369s 00:07:40.299 user 0m1.184s 00:07:40.299 sys 0m0.085s 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.299 14:25:52 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:40.299 ************************************ 00:07:40.299 END TEST accel_decomp_mthread 00:07:40.299 ************************************ 00:07:40.299 14:25:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:40.299 14:25:52 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:40.299 14:25:52 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:40.299 14:25:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.299 14:25:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.299 ************************************ 00:07:40.299 START TEST accel_decomp_full_mthread 00:07:40.299 ************************************ 00:07:40.299 14:25:52 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:40.299 14:25:52 
accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:40.299 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:40.299 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.299 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:40.299 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.300 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:40.300 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:40.300 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.300 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.300 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.300 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.300 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.300 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:40.300 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:40.558 [2024-07-10 14:25:52.590716] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:40.558 [2024-07-10 14:25:52.590817] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78019 ] 00:07:40.558 [2024-07-10 14:25:52.707715] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:40.558 [2024-07-10 14:25:52.729978] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.558 [2024-07-10 14:25:52.773656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.558 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.558 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.558 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.558 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.558 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.558 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.558 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.558 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.558 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.558 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.558 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.558 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.558 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:40.558 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.558 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.558 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.558 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.558 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.558 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.558 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.558 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.558 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.558 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.559 14:25:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.931 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:07:41.931 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.931 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.931 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.931 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.931 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.931 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.931 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.931 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.931 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.931 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.931 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.931 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.931 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.932 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.932 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.932 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.932 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.932 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.932 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.932 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.932 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.932 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.932 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.932 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.932 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.932 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.932 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.932 ************************************ 00:07:41.932 END TEST accel_decomp_full_mthread 00:07:41.932 ************************************ 00:07:41.932 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.932 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:41.932 14:25:53 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.932 00:07:41.932 real 0m1.382s 00:07:41.932 user 0m1.203s 00:07:41.932 sys 0m0.085s 00:07:41.932 14:25:53 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.932 14:25:53 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:41.932 14:25:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:41.932 14:25:53 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:41.932 14:25:53 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:41.932 14:25:53 accel -- 
accel/accel.sh@137 -- # build_accel_config 00:07:41.932 14:25:53 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:41.932 14:25:53 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.932 14:25:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.932 14:25:53 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.932 14:25:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.932 14:25:53 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.932 14:25:53 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.932 14:25:53 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.932 14:25:53 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:41.932 14:25:53 accel -- accel/accel.sh@41 -- # jq -r . 00:07:41.932 ************************************ 00:07:41.932 START TEST accel_dif_functional_tests 00:07:41.932 ************************************ 00:07:41.932 14:25:53 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:41.932 [2024-07-10 14:25:54.044382] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:41.932 [2024-07-10 14:25:54.044476] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78049 ] 00:07:41.932 [2024-07-10 14:25:54.162830] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:41.932 [2024-07-10 14:25:54.181146] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:42.190 [2024-07-10 14:25:54.220931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.190 [2024-07-10 14:25:54.221009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.190 [2024-07-10 14:25:54.221015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.190 00:07:42.190 00:07:42.190 CUnit - A unit testing framework for C - Version 2.1-3 00:07:42.190 http://cunit.sourceforge.net/ 00:07:42.190 00:07:42.190 00:07:42.190 Suite: accel_dif 00:07:42.190 Test: verify: DIF generated, GUARD check ...passed 00:07:42.190 Test: verify: DIF generated, APPTAG check ...passed 00:07:42.190 Test: verify: DIF generated, REFTAG check ...passed 00:07:42.190 Test: verify: DIF not generated, GUARD check ...passed 00:07:42.190 Test: verify: DIF not generated, APPTAG check ...[2024-07-10 14:25:54.271822] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:42.190 passed 00:07:42.190 Test: verify: DIF not generated, REFTAG check ...[2024-07-10 14:25:54.271903] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:42.190 [2024-07-10 14:25:54.272050] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:42.190 passed 00:07:42.190 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:42.190 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:42.190 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-07-10 14:25:54.272114] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:42.190 passed 00:07:42.190 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:42.190 Test: verify: REFTAG_INIT 
correct, REFTAG check ...passed 00:07:42.190 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:42.190 Test: verify copy: DIF generated, GUARD check ...[2024-07-10 14:25:54.272448] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:42.190 passed 00:07:42.190 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:42.190 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:42.190 Test: verify copy: DIF not generated, GUARD check ...passed 00:07:42.190 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-10 14:25:54.272748] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:42.190 [2024-07-10 14:25:54.272848] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:42.190 passed 00:07:42.190 Test: verify copy: DIF not generated, REFTAG check ...passed 00:07:42.190 Test: generate copy: DIF generated, GUARD check ...passed 00:07:42.190 Test: generate copy: DIF generated, APTTAG check ...[2024-07-10 14:25:54.272878] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:42.190 passed 00:07:42.190 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:42.191 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:42.191 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:42.191 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:42.191 Test: generate copy: iovecs-len validate ...[2024-07-10 14:25:54.273496] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:42.191 passed 00:07:42.191 Test: generate copy: buffer alignment validate ...passed 00:07:42.191 00:07:42.191 Run Summary: Type Total Ran Passed Failed Inactive 00:07:42.191 suites 1 1 n/a 0 0 00:07:42.191 tests 26 26 26 0 0 00:07:42.191 asserts 115 115 115 0 n/a 00:07:42.191 00:07:42.191 Elapsed time = 0.005 seconds 00:07:42.191 00:07:42.191 real 0m0.412s 00:07:42.191 user 0m0.485s 00:07:42.191 sys 0m0.097s 00:07:42.191 14:25:54 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.191 ************************************ 00:07:42.191 14:25:54 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:42.191 END TEST accel_dif_functional_tests 00:07:42.191 ************************************ 00:07:42.191 14:25:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:42.191 ************************************ 00:07:42.191 END TEST accel 00:07:42.191 ************************************ 00:07:42.191 00:07:42.191 real 0m30.024s 00:07:42.191 user 0m32.195s 00:07:42.191 sys 0m2.857s 00:07:42.191 14:25:54 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.191 14:25:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.449 14:25:54 -- common/autotest_common.sh@1142 -- # return 0 00:07:42.449 14:25:54 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:42.449 14:25:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:42.449 14:25:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.449 14:25:54 -- common/autotest_common.sh@10 -- # set +x 00:07:42.449 ************************************ 00:07:42.449 START TEST accel_rpc 00:07:42.449 ************************************ 00:07:42.449 14:25:54 
accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:42.449 * Looking for test storage... 00:07:42.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:42.449 14:25:54 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:42.449 14:25:54 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=78119 00:07:42.449 14:25:54 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:42.449 14:25:54 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 78119 00:07:42.449 14:25:54 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 78119 ']' 00:07:42.449 14:25:54 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.449 14:25:54 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:42.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.449 14:25:54 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.449 14:25:54 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:42.449 14:25:54 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.449 [2024-07-10 14:25:54.643431] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:42.449 [2024-07-10 14:25:54.643549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78119 ] 00:07:42.708 [2024-07-10 14:25:54.761651] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
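While spdk_tgt comes up for the accel_rpc test, it is worth noting that the whole flow traced around here reduces to a short rpc.py sequence once the target is started with --wait-for-rpc. A minimal manual reproduction, using only the paths and RPC names that appear in this trace and assuming rpc.py talks to the default /var/tmp/spdk.sock socket, might look like:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
  # assign the copy opcode to the software module before finishing init
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
  # confirm the assignment took effect (the test greps for "software")
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy

This is a sketch of the test's intent, not a replacement for accel_rpc.sh; the script additionally exercises an assignment to a non-existent module ("incorrect") and the cleanup traps seen in the trace.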
00:07:42.708 [2024-07-10 14:25:54.779801] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.708 [2024-07-10 14:25:54.830629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.329 14:25:55 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:43.329 14:25:55 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:43.329 14:25:55 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:43.329 14:25:55 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:43.329 14:25:55 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:43.329 14:25:55 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:43.329 14:25:55 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:43.329 14:25:55 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:43.329 14:25:55 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.329 14:25:55 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.329 ************************************ 00:07:43.329 START TEST accel_assign_opcode 00:07:43.329 ************************************ 00:07:43.329 14:25:55 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:43.329 14:25:55 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:43.329 14:25:55 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.329 14:25:55 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:43.329 [2024-07-10 14:25:55.603386] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:43.329 14:25:55 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.329 14:25:55 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:43.329 14:25:55 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.329 14:25:55 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:43.329 [2024-07-10 14:25:55.611376] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:43.329 14:25:55 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.329 14:25:55 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:43.329 14:25:55 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.329 14:25:55 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:43.588 14:25:55 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.588 14:25:55 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:43.588 14:25:55 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:43.588 14:25:55 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:43.588 14:25:55 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.588 14:25:55 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:43.588 14:25:55 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.588 software 00:07:43.588 00:07:43.588 real 0m0.189s 
00:07:43.588 user 0m0.052s 00:07:43.588 sys 0m0.009s 00:07:43.588 14:25:55 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.588 14:25:55 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:43.588 ************************************ 00:07:43.588 END TEST accel_assign_opcode 00:07:43.588 ************************************ 00:07:43.588 14:25:55 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:43.588 14:25:55 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 78119 00:07:43.588 14:25:55 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 78119 ']' 00:07:43.588 14:25:55 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 78119 00:07:43.588 14:25:55 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:43.588 14:25:55 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:43.588 14:25:55 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78119 00:07:43.588 killing process with pid 78119 00:07:43.588 14:25:55 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:43.588 14:25:55 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:43.588 14:25:55 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78119' 00:07:43.588 14:25:55 accel_rpc -- common/autotest_common.sh@967 -- # kill 78119 00:07:43.588 14:25:55 accel_rpc -- common/autotest_common.sh@972 -- # wait 78119 00:07:43.846 ************************************ 00:07:43.846 END TEST accel_rpc 00:07:43.846 ************************************ 00:07:43.846 00:07:43.846 real 0m1.585s 00:07:43.846 user 0m1.759s 00:07:43.846 sys 0m0.348s 00:07:43.846 14:25:56 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.846 14:25:56 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.846 14:25:56 -- common/autotest_common.sh@1142 -- # return 0 00:07:43.846 14:25:56 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:43.846 14:25:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:43.846 14:25:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.846 14:25:56 -- common/autotest_common.sh@10 -- # set +x 00:07:43.846 ************************************ 00:07:43.846 START TEST app_cmdline 00:07:43.846 ************************************ 00:07:43.846 14:25:56 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:44.104 * Looking for test storage... 00:07:44.104 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:44.104 14:25:56 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:44.104 14:25:56 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=78225 00:07:44.104 14:25:56 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:44.104 14:25:56 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 78225 00:07:44.104 14:25:56 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 78225 ']' 00:07:44.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
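The app_cmdline test starting here exercises the --rpcs-allowed whitelist of spdk_tgt. Stripped of the xtrace plumbing, its checks amount to roughly the following (rpc.py path as used throughout this run; the expected error code is the one reported further down in the trace):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  # whitelisted methods succeed
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort
  # anything outside the whitelist is rejected with JSON-RPC -32601 (Method not found)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats || echo 'rejected as expected'

Only the `|| echo` at the end is added for illustration; every command and method name above is taken from the trace itself.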
00:07:44.104 14:25:56 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.104 14:25:56 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:44.104 14:25:56 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.104 14:25:56 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:44.104 14:25:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:44.104 [2024-07-10 14:25:56.261435] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:44.104 [2024-07-10 14:25:56.261583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78225 ] 00:07:44.104 [2024-07-10 14:25:56.388576] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:44.362 [2024-07-10 14:25:56.403388] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.362 [2024-07-10 14:25:56.448084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.362 14:25:56 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.362 14:25:56 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:44.362 14:25:56 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:44.928 { 00:07:44.928 "fields": { 00:07:44.928 "commit": "9937c0160", 00:07:44.928 "major": 24, 00:07:44.928 "minor": 9, 00:07:44.928 "patch": 0, 00:07:44.928 "suffix": "-pre" 00:07:44.928 }, 00:07:44.928 "version": "SPDK v24.09-pre git sha1 9937c0160" 00:07:44.928 } 00:07:44.928 14:25:56 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:44.928 14:25:56 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:44.928 14:25:56 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:44.928 14:25:56 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:44.928 14:25:56 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:44.928 14:25:56 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:44.928 14:25:56 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.928 14:25:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:44.928 14:25:56 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:44.928 14:25:57 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.928 14:25:57 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:44.928 14:25:57 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:44.928 14:25:57 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:44.928 14:25:57 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:44.928 14:25:57 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:44.928 14:25:57 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:44.928 14:25:57 
app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.928 14:25:57 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:44.928 14:25:57 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.928 14:25:57 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:44.928 14:25:57 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.928 14:25:57 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:44.928 14:25:57 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:44.928 14:25:57 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:45.187 2024/07/10 14:25:57 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:45.187 request: 00:07:45.187 { 00:07:45.187 "method": "env_dpdk_get_mem_stats", 00:07:45.187 "params": {} 00:07:45.187 } 00:07:45.187 Got JSON-RPC error response 00:07:45.187 GoRPCClient: error on JSON-RPC call 00:07:45.187 14:25:57 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:45.187 14:25:57 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:45.187 14:25:57 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:45.187 14:25:57 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:45.187 14:25:57 app_cmdline -- app/cmdline.sh@1 -- # killprocess 78225 00:07:45.187 14:25:57 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 78225 ']' 00:07:45.187 14:25:57 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 78225 00:07:45.187 14:25:57 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:45.187 14:25:57 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:45.187 14:25:57 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78225 00:07:45.187 14:25:57 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:45.187 14:25:57 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:45.187 killing process with pid 78225 00:07:45.187 14:25:57 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78225' 00:07:45.187 14:25:57 app_cmdline -- common/autotest_common.sh@967 -- # kill 78225 00:07:45.187 14:25:57 app_cmdline -- common/autotest_common.sh@972 -- # wait 78225 00:07:45.445 00:07:45.445 real 0m1.521s 00:07:45.445 user 0m2.143s 00:07:45.445 sys 0m0.367s 00:07:45.445 14:25:57 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.445 14:25:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:45.445 ************************************ 00:07:45.445 END TEST app_cmdline 00:07:45.445 ************************************ 00:07:45.445 14:25:57 -- common/autotest_common.sh@1142 -- # return 0 00:07:45.445 14:25:57 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:45.445 14:25:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:45.445 14:25:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.445 14:25:57 -- common/autotest_common.sh@10 -- # set +x 00:07:45.445 ************************************ 
00:07:45.445 START TEST version 00:07:45.445 ************************************ 00:07:45.445 14:25:57 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:45.704 * Looking for test storage... 00:07:45.704 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:45.704 14:25:57 version -- app/version.sh@17 -- # get_header_version major 00:07:45.704 14:25:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:45.704 14:25:57 version -- app/version.sh@14 -- # cut -f2 00:07:45.704 14:25:57 version -- app/version.sh@14 -- # tr -d '"' 00:07:45.704 14:25:57 version -- app/version.sh@17 -- # major=24 00:07:45.704 14:25:57 version -- app/version.sh@18 -- # get_header_version minor 00:07:45.704 14:25:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:45.704 14:25:57 version -- app/version.sh@14 -- # cut -f2 00:07:45.704 14:25:57 version -- app/version.sh@14 -- # tr -d '"' 00:07:45.704 14:25:57 version -- app/version.sh@18 -- # minor=9 00:07:45.704 14:25:57 version -- app/version.sh@19 -- # get_header_version patch 00:07:45.704 14:25:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:45.704 14:25:57 version -- app/version.sh@14 -- # cut -f2 00:07:45.704 14:25:57 version -- app/version.sh@14 -- # tr -d '"' 00:07:45.704 14:25:57 version -- app/version.sh@19 -- # patch=0 00:07:45.704 14:25:57 version -- app/version.sh@20 -- # get_header_version suffix 00:07:45.704 14:25:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:45.704 14:25:57 version -- app/version.sh@14 -- # cut -f2 00:07:45.704 14:25:57 version -- app/version.sh@14 -- # tr -d '"' 00:07:45.704 14:25:57 version -- app/version.sh@20 -- # suffix=-pre 00:07:45.704 14:25:57 version -- app/version.sh@22 -- # version=24.9 00:07:45.704 14:25:57 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:45.704 14:25:57 version -- app/version.sh@28 -- # version=24.9rc0 00:07:45.704 14:25:57 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:45.704 14:25:57 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:45.704 14:25:57 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:45.704 14:25:57 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:45.704 00:07:45.704 real 0m0.132s 00:07:45.704 user 0m0.079s 00:07:45.704 sys 0m0.080s 00:07:45.704 14:25:57 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.704 14:25:57 version -- common/autotest_common.sh@10 -- # set +x 00:07:45.704 ************************************ 00:07:45.704 END TEST version 00:07:45.704 ************************************ 00:07:45.704 14:25:57 -- common/autotest_common.sh@1142 -- # return 0 00:07:45.704 14:25:57 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:45.704 14:25:57 -- spdk/autotest.sh@198 -- # uname -s 00:07:45.704 14:25:57 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:45.704 14:25:57 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:45.704 14:25:57 -- 
spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:45.704 14:25:57 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:45.704 14:25:57 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:45.704 14:25:57 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:45.704 14:25:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:45.704 14:25:57 -- common/autotest_common.sh@10 -- # set +x 00:07:45.704 14:25:57 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:45.704 14:25:57 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:45.704 14:25:57 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:45.704 14:25:57 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:45.704 14:25:57 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:45.704 14:25:57 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:45.704 14:25:57 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:45.704 14:25:57 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:45.704 14:25:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.704 14:25:57 -- common/autotest_common.sh@10 -- # set +x 00:07:45.704 ************************************ 00:07:45.704 START TEST nvmf_tcp 00:07:45.704 ************************************ 00:07:45.704 14:25:57 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:45.704 * Looking for test storage... 00:07:45.704 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:45.704 14:25:57 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:45.704 14:25:57 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:45.704 14:25:57 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:45.704 14:25:57 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:45.705 14:25:57 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.705 14:25:57 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.705 14:25:57 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.705 14:25:57 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.705 14:25:57 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.705 14:25:57 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.705 14:25:57 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.705 14:25:57 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.705 14:25:57 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.705 14:25:57 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.705 14:25:57 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:07:45.705 14:25:57 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:07:45.705 14:25:57 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.705 14:25:57 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.705 14:25:57 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:45.705 14:25:57 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.705 14:25:57 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:45.705 14:25:57 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.705 14:25:57 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.705 14:25:57 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.705 14:25:57 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.705 14:25:57 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.705 14:25:57 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.705 14:25:57 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:45.705 14:25:57 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.705 14:25:57 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:45.705 14:25:57 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:45.705 14:25:57 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:45.705 14:25:57 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.705 14:25:57 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.705 14:25:57 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.705 14:25:57 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:45.705 14:25:57 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:45.705 14:25:57 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:45.705 14:25:57 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:45.705 14:25:57 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:45.705 14:25:57 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:45.705 14:25:57 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:45.705 14:25:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:45.964 14:25:57 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:45.964 14:25:57 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:45.964 14:25:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:45.964 14:25:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.964 14:25:57 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:07:45.964 ************************************ 00:07:45.964 START TEST nvmf_example 00:07:45.964 ************************************ 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:45.964 * Looking for test storage... 00:07:45.964 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:45.964 14:25:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:45.965 Cannot find device "nvmf_init_br" 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # true 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:45.965 Cannot find device "nvmf_tgt_br" 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # true 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:45.965 Cannot find device "nvmf_tgt_br2" 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # true 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:45.965 Cannot find device "nvmf_init_br" 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # true 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:45.965 Cannot find device "nvmf_tgt_br" 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # true 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:45.965 Cannot find device "nvmf_tgt_br2" 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # true 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:45.965 Cannot find device "nvmf_br" 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # true 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:45.965 Cannot find device "nvmf_init_if" 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # true 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:45.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # true 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:45.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # true 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:45.965 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 
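In condensed form, the nvmf_veth_init topology built in the commands above is three veth pairs bridged together, with the target-side ends moved into the nvmf_tgt_ns_spdk namespace. Every command below is lifted from the trace; only the grouping comments are added, and the `ip link set ... up` steps are omitted for brevity:

  ip netns add nvmf_tgt_ns_spdk
  # one veth pair for the initiator, two for the target
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiator side gets 10.0.0.1, the namespaced target sides 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bridge the host-side ends and open TCP/4420 toward the initiator interface
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The pings that follow confirm that 10.0.0.2 and 10.0.0.3 are reachable from the host and 10.0.0.1 from inside the namespace.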
00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:46.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:46.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:07:46.223 00:07:46.223 --- 10.0.0.2 ping statistics --- 00:07:46.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.223 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:46.223 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:46.223 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:07:46.223 00:07:46.223 --- 10.0.0.3 ping statistics --- 00:07:46.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.223 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:46.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:46.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:07:46.223 00:07:46.223 --- 10.0.0.1 ping statistics --- 00:07:46.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.223 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.223 14:25:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:46.224 14:25:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:46.224 14:25:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=78555 00:07:46.224 14:25:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:46.224 14:25:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:46.224 14:25:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 78555 00:07:46.224 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 78555 ']' 00:07:46.224 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.224 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:46.224 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
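With the example nvmf target now launching inside the namespace, the target configuration driven in the next stretch of the trace amounts, in plain shell terms, to the sequence below. rpc.py is shorthand for /home/vagrant/spdk_repo/spdk/scripts/rpc.py against the default socket; all names and arguments are the ones that appear in the trace:

  # transport, backing bdev, subsystem, namespace, listener
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512        # creates Malloc0 (64 MiB, 512-byte blocks)
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # then exercise it from the host side: 4 KiB random read/write mix, queue depth 64, 10 seconds
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The IOPS and latency figures this produced are reported a little further down in the trace.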
00:07:46.224 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:46.224 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.790 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:46.790 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:46.790 14:25:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:46.790 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:46.790 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.790 14:25:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:46.790 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.790 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.790 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.790 14:25:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:46.790 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.790 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.790 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.790 14:25:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:46.790 14:25:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:46.790 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.790 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.790 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.790 14:25:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:46.790 14:25:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:46.790 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.790 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.790 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.790 14:25:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:46.790 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.790 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.790 14:25:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.791 14:25:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:07:46.791 14:25:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:58.999 Initializing NVMe Controllers 00:07:58.999 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:58.999 Associating TCP 
(addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:58.999 Initialization complete. Launching workers. 00:07:58.999 ======================================================== 00:07:58.999 Latency(us) 00:07:58.999 Device Information : IOPS MiB/s Average min max 00:07:58.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13185.90 51.51 4855.94 899.86 23325.47 00:07:58.999 ======================================================== 00:07:58.999 Total : 13185.90 51.51 4855.94 899.86 23325.47 00:07:58.999 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:58.999 rmmod nvme_tcp 00:07:58.999 rmmod nvme_fabrics 00:07:58.999 rmmod nvme_keyring 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 78555 ']' 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 78555 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 78555 ']' 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 78555 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78555 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:58.999 killing process with pid 78555 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78555' 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 78555 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 78555 00:07:58.999 nvmf threads initialize successfully 00:07:58.999 bdev subsystem init successfully 00:07:58.999 created a nvmf target service 00:07:58.999 create targets's poll groups done 00:07:58.999 all subsystems of target started 00:07:58.999 nvmf target is running 00:07:58.999 all subsystems of target stopped 00:07:58.999 destroy targets's poll groups done 00:07:58.999 destroyed the nvmf target service 00:07:58.999 bdev subsystem finish successfully 00:07:58.999 nvmf threads destroy successfully 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:58.999 14:26:09 
nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:58.999 00:07:58.999 real 0m11.555s 00:07:58.999 user 0m41.278s 00:07:58.999 sys 0m1.873s 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.999 14:26:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:58.999 ************************************ 00:07:58.999 END TEST nvmf_example 00:07:58.999 ************************************ 00:07:58.999 14:26:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:58.999 14:26:09 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:58.999 14:26:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:58.999 14:26:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.999 14:26:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:58.999 ************************************ 00:07:58.999 START TEST nvmf_filesystem 00:07:58.999 ************************************ 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:58.999 * Looking for test storage... 
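For reference, the nvmf_example run that just finished above reduces to a short RPC sequence: create a TCP transport, back it with a malloc bdev, expose that bdev through a subsystem and listener, then drive it with spdk_nvme_perf. A minimal sketch of the same steps, assuming an nvmf_tgt is already running on the default RPC socket and using the repo's scripts/rpc.py instead of the test harness's rpc_cmd wrapper (both assumptions; the transport options -o -u 8192 and the perf arguments are copied verbatim from the trace):

    # TCP transport with the options used by the test above
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB ramdisk bdev with 512-byte blocks; the call returns the bdev name (Malloc0)
    scripts/rpc.py bdev_malloc_create 64 512
    # subsystem allowing any host (-a) with the serial number used above (-s)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 10-second 4 KiB random read/write mix (-M 30) at queue depth 64 against the listener
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Teardown in the log is just nvmftestfini: unload nvme-tcp/nvme-fabrics via modprobe -r and kill the nvmf_tgt process (pid 78555 in this run).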
00:07:58.999 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:58.999 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:59.000 14:26:09 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # 
_examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:59.000 #define SPDK_CONFIG_H 00:07:59.000 #define SPDK_CONFIG_APPS 1 00:07:59.000 #define SPDK_CONFIG_ARCH native 00:07:59.000 #undef SPDK_CONFIG_ASAN 00:07:59.000 #define SPDK_CONFIG_AVAHI 1 00:07:59.000 #undef SPDK_CONFIG_CET 00:07:59.000 #define SPDK_CONFIG_COVERAGE 1 00:07:59.000 #define SPDK_CONFIG_CROSS_PREFIX 00:07:59.000 #undef SPDK_CONFIG_CRYPTO 00:07:59.000 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:59.000 #undef SPDK_CONFIG_CUSTOMOCF 00:07:59.000 #undef SPDK_CONFIG_DAOS 00:07:59.000 #define SPDK_CONFIG_DAOS_DIR 00:07:59.000 #define SPDK_CONFIG_DEBUG 1 00:07:59.000 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:59.000 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:07:59.000 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:07:59.000 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:07:59.000 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:59.000 #undef SPDK_CONFIG_DPDK_UADK 00:07:59.000 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:59.000 #define SPDK_CONFIG_EXAMPLES 1 00:07:59.000 #undef SPDK_CONFIG_FC 00:07:59.000 #define SPDK_CONFIG_FC_PATH 00:07:59.000 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:59.000 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:59.000 #undef SPDK_CONFIG_FUSE 00:07:59.000 #undef SPDK_CONFIG_FUZZER 00:07:59.000 #define SPDK_CONFIG_FUZZER_LIB 00:07:59.000 #define SPDK_CONFIG_GOLANG 1 00:07:59.000 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:59.000 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:59.000 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:59.000 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:59.000 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:59.000 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:59.000 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:59.000 #define SPDK_CONFIG_IDXD 1 00:07:59.000 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:59.000 #undef SPDK_CONFIG_IPSEC_MB 00:07:59.000 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:59.000 #define SPDK_CONFIG_ISAL 1 00:07:59.000 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:59.000 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:59.000 #define SPDK_CONFIG_LIBDIR 00:07:59.000 #undef SPDK_CONFIG_LTO 00:07:59.000 #define SPDK_CONFIG_MAX_LCORES 128 00:07:59.000 #define SPDK_CONFIG_NVME_CUSE 1 00:07:59.000 #undef SPDK_CONFIG_OCF 00:07:59.000 #define SPDK_CONFIG_OCF_PATH 00:07:59.000 #define SPDK_CONFIG_OPENSSL_PATH 00:07:59.000 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:59.000 #define SPDK_CONFIG_PGO_DIR 00:07:59.000 #undef SPDK_CONFIG_PGO_USE 00:07:59.000 #define 
SPDK_CONFIG_PREFIX /usr/local 00:07:59.000 #undef SPDK_CONFIG_RAID5F 00:07:59.000 #undef SPDK_CONFIG_RBD 00:07:59.000 #define SPDK_CONFIG_RDMA 1 00:07:59.000 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:59.000 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:59.000 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:59.000 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:59.000 #define SPDK_CONFIG_SHARED 1 00:07:59.000 #undef SPDK_CONFIG_SMA 00:07:59.000 #define SPDK_CONFIG_TESTS 1 00:07:59.000 #undef SPDK_CONFIG_TSAN 00:07:59.000 #define SPDK_CONFIG_UBLK 1 00:07:59.000 #define SPDK_CONFIG_UBSAN 1 00:07:59.000 #undef SPDK_CONFIG_UNIT_TESTS 00:07:59.000 #undef SPDK_CONFIG_URING 00:07:59.000 #define SPDK_CONFIG_URING_PATH 00:07:59.000 #undef SPDK_CONFIG_URING_ZNS 00:07:59.000 #define SPDK_CONFIG_USDT 1 00:07:59.000 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:59.000 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:59.000 #undef SPDK_CONFIG_VFIO_USER 00:07:59.000 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:59.000 #define SPDK_CONFIG_VHOST 1 00:07:59.000 #define SPDK_CONFIG_VIRTIO 1 00:07:59.000 #undef SPDK_CONFIG_VTUNE 00:07:59.000 #define SPDK_CONFIG_VTUNE_DIR 00:07:59.000 #define SPDK_CONFIG_WERROR 1 00:07:59.000 #define SPDK_CONFIG_WPDK_DIR 00:07:59.000 #undef SPDK_CONFIG_XNVME 00:07:59.000 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.000 14:26:09 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- 
pm/common@81 -- # [[ Linux == Linux ]] 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:59.001 14:26:09 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /home/vagrant/spdk_repo/dpdk/build 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:59.001 14:26:09 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : main 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:59.001 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 
00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 1 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:59.002 
14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:59.002 14:26:09 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 78790 ]] 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 78790 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local 
requested_size=2147483648 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:59.002 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.3Y3753 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.3Y3753/tests/target /tmp/spdk.3Y3753 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6264516608 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267891712 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2494353408 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507157504 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@363 -- # uses["$mount"]=12804096 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13070200832 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5976395776 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13070200832 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5976395776 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267740160 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267895808 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=155648 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use 
avail _ mount 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=93984661504 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5718118400 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:59.003 * Looking for test storage... 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=13070200832 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:59.003 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:59.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 
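The trace above, ending at "Found test storage at", is autotest_common.sh's set_test_storage helper at work: it asks for 2 GiB plus a safety margin (requested_size=2214592512), parses df -T into the mounts/fss/sizes/avails arrays, and walks the candidate directories (the test dir itself, then a /tmp/spdk.XXXXXX fallback) until one sits on a filesystem with enough free space; here /home (btrfs, 13070200832 bytes available) wins and is exported as SPDK_TEST_STORAGE. A stripped-down sketch of the same check, using a hypothetical check_storage helper rather than the real function:

    # succeed if the filesystem holding "$dir" has at least "$need" bytes free
    check_storage() {
        local dir=$1 need=$2 avail mount
        # df -P -B1 prints one data line: fs size used avail use% mountpoint
        read -r _ _ _ avail _ mount < <(df -P -B1 "$dir" | tail -n 1)
        if [[ $avail -ge $need ]]; then
            echo "using $mount ($avail bytes free)"
            return 0
        fi
        return 1
    }
    # the trace asks for 2 GiB + a 64 MiB margin for the target tests
    check_storage /home/vagrant/spdk_repo/spdk/test/nvmf/target $((2147483648 + 67108864))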
00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.004 14:26:09 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:59.004 Cannot find device 
"nvmf_tgt_br" 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:59.004 Cannot find device "nvmf_tgt_br2" 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:59.004 Cannot find device "nvmf_tgt_br" 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:59.004 Cannot find device "nvmf_tgt_br2" 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:59.004 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:59.004 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:59.004 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:59.005 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:59.005 14:26:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:59.005 14:26:10 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:59.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:59.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:07:59.005 00:07:59.005 --- 10.0.0.2 ping statistics --- 00:07:59.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.005 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:59.005 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:59.005 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:07:59.005 00:07:59.005 --- 10.0.0.3 ping statistics --- 00:07:59.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.005 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:59.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:59.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:07:59.005 00:07:59.005 --- 10.0.0.1 ping statistics --- 00:07:59.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.005 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:59.005 ************************************ 00:07:59.005 START TEST nvmf_filesystem_no_in_capsule 00:07:59.005 ************************************ 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=78943 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 78943 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 78943 ']' 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
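The ip and iptables commands traced above come from nvmf_veth_init in nvmf/common.sh: the target-side veth ends (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, the initiator keeps nvmf_init_if at 10.0.0.1, the host-side peers are enslaved to the nvmf_br bridge, TCP port 4420 is opened in iptables, and connectivity is verified with single pings. Condensed into one sketch (the cleanup of stale links and the resulting "Cannot find device" probes are omitted):

    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br  up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1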
00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.005 [2024-07-10 14:26:10.288189] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:07:59.005 [2024-07-10 14:26:10.288333] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.005 [2024-07-10 14:26:10.417333] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:59.005 [2024-07-10 14:26:10.432563] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:59.005 [2024-07-10 14:26:10.470784] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:59.005 [2024-07-10 14:26:10.471017] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:59.005 [2024-07-10 14:26:10.471161] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:59.005 [2024-07-10 14:26:10.471372] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:59.005 [2024-07-10 14:26:10.471477] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
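What the trace is doing at this point is nvmfappstart: nvmf_tgt is launched inside the namespace with the core mask and tracepoint flags shown above, its pid is recorded (78943 in this run), and waitforlisten blocks until the /var/tmp/spdk.sock RPC socket answers. A rough equivalent is sketched below; polling rpc_get_methods through scripts/rpc.py is one way to test readiness and stands in for the repository's waitforlisten helper, and the retry count is an assumption.

    # Start the target in the network namespace and wait for its RPC socket.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    for _ in $(seq 1 100); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
               rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.5
    done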
00:07:59.005 [2024-07-10 14:26:10.471656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.005 [2024-07-10 14:26:10.473318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.005 [2024-07-10 14:26:10.473420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:59.005 [2024-07-10 14:26:10.473429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.005 [2024-07-10 14:26:10.594912] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.005 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.006 Malloc1 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.006 [2024-07-10 14:26:10.723952] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:59.006 { 00:07:59.006 "aliases": [ 00:07:59.006 "1541f07d-d637-40c4-a108-f597d6be4bb8" 00:07:59.006 ], 00:07:59.006 "assigned_rate_limits": { 00:07:59.006 "r_mbytes_per_sec": 0, 00:07:59.006 "rw_ios_per_sec": 0, 00:07:59.006 "rw_mbytes_per_sec": 0, 00:07:59.006 "w_mbytes_per_sec": 0 00:07:59.006 }, 00:07:59.006 "block_size": 512, 00:07:59.006 "claim_type": "exclusive_write", 00:07:59.006 "claimed": true, 00:07:59.006 "driver_specific": {}, 00:07:59.006 "memory_domains": [ 00:07:59.006 { 00:07:59.006 "dma_device_id": "system", 00:07:59.006 "dma_device_type": 1 00:07:59.006 }, 00:07:59.006 { 00:07:59.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.006 "dma_device_type": 2 00:07:59.006 } 00:07:59.006 ], 00:07:59.006 "name": "Malloc1", 00:07:59.006 "num_blocks": 1048576, 00:07:59.006 "product_name": "Malloc disk", 00:07:59.006 "supported_io_types": { 00:07:59.006 "abort": true, 00:07:59.006 "compare": false, 00:07:59.006 "compare_and_write": false, 00:07:59.006 "copy": true, 00:07:59.006 "flush": true, 00:07:59.006 "get_zone_info": false, 00:07:59.006 "nvme_admin": false, 00:07:59.006 "nvme_io": false, 00:07:59.006 "nvme_io_md": false, 00:07:59.006 "nvme_iov_md": false, 00:07:59.006 "read": true, 00:07:59.006 "reset": true, 00:07:59.006 "seek_data": false, 00:07:59.006 "seek_hole": false, 00:07:59.006 "unmap": true, 00:07:59.006 
"write": true, 00:07:59.006 "write_zeroes": true, 00:07:59.006 "zcopy": true, 00:07:59.006 "zone_append": false, 00:07:59.006 "zone_management": false 00:07:59.006 }, 00:07:59.006 "uuid": "1541f07d-d637-40c4-a108-f597d6be4bb8", 00:07:59.006 "zoned": false 00:07:59.006 } 00:07:59.006 ]' 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:59.006 14:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:59.006 14:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:59.006 14:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:59.006 14:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:59.006 14:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:59.006 14:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:00.907 14:26:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:00.907 14:26:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:00.907 14:26:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:00.907 14:26:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:00.907 14:26:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:00.907 14:26:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:00.907 14:26:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:00.907 14:26:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:00.907 14:26:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:00.907 14:26:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes 
nvme0n1 00:08:00.907 14:26:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:00.907 14:26:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:00.907 14:26:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:00.907 14:26:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:00.907 14:26:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:00.907 14:26:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:00.907 14:26:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:00.907 14:26:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:01.164 14:26:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:02.096 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:02.096 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:02.096 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:02.096 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.096 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.096 ************************************ 00:08:02.096 START TEST filesystem_ext4 00:08:02.096 ************************************ 00:08:02.096 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:02.096 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:02.096 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:02.096 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:02.096 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:02.096 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:02.096 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:02.096 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:02.096 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:02.096 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:02.096 14:26:14 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:02.096 mke2fs 1.46.5 (30-Dec-2021) 00:08:02.096 Discarding device blocks: 0/522240 done 00:08:02.096 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:02.096 Filesystem UUID: 145340ab-9290-458f-88f1-c0a677f51f36 00:08:02.096 Superblock backups stored on blocks: 00:08:02.096 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:02.096 00:08:02.096 Allocating group tables: 0/64 done 00:08:02.096 Writing inode tables: 0/64 done 00:08:02.096 Creating journal (8192 blocks): done 00:08:02.096 Writing superblocks and filesystem accounting information: 0/64 done 00:08:02.096 00:08:02.096 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:02.096 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:02.353 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:02.353 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:02.353 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:02.353 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:02.353 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:02.353 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:02.353 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 78943 00:08:02.353 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:02.353 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:02.353 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:02.353 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:02.353 ************************************ 00:08:02.353 END TEST filesystem_ext4 00:08:02.353 ************************************ 00:08:02.353 00:08:02.353 real 0m0.401s 00:08:02.353 user 0m0.017s 00:08:02.353 sys 0m0.058s 00:08:02.353 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.353 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:02.610 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:02.610 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:02.610 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:02.610 14:26:14 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.610 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.610 ************************************ 00:08:02.610 START TEST filesystem_btrfs 00:08:02.610 ************************************ 00:08:02.610 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:02.610 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:02.610 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:02.610 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:02.610 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:02.610 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:02.610 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:02.610 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:02.610 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:02.610 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:02.610 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:02.610 btrfs-progs v6.6.2 00:08:02.610 See https://btrfs.readthedocs.io for more information. 00:08:02.610 00:08:02.610 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:02.610 NOTE: several default settings have changed in version 5.15, please make sure 00:08:02.610 this does not affect your deployments: 00:08:02.610 - DUP for metadata (-m dup) 00:08:02.610 - enabled no-holes (-O no-holes) 00:08:02.610 - enabled free-space-tree (-R free-space-tree) 00:08:02.610 00:08:02.610 Label: (null) 00:08:02.610 UUID: 7df398e5-d38b-4275-847a-dfe141e9d92f 00:08:02.610 Node size: 16384 00:08:02.610 Sector size: 4096 00:08:02.610 Filesystem size: 510.00MiB 00:08:02.610 Block group profiles: 00:08:02.610 Data: single 8.00MiB 00:08:02.610 Metadata: DUP 32.00MiB 00:08:02.610 System: DUP 8.00MiB 00:08:02.610 SSD detected: yes 00:08:02.610 Zoned device: no 00:08:02.610 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:02.610 Runtime features: free-space-tree 00:08:02.610 Checksum: crc32c 00:08:02.610 Number of devices: 1 00:08:02.610 Devices: 00:08:02.610 ID SIZE PATH 00:08:02.610 1 510.00MiB /dev/nvme0n1p1 00:08:02.610 00:08:02.610 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:02.610 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:02.610 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:02.611 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:02.611 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:02.611 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:02.611 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:02.611 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:02.611 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 78943 00:08:02.611 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:02.611 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:02.611 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:02.611 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:02.611 ************************************ 00:08:02.611 END TEST filesystem_btrfs 00:08:02.611 ************************************ 00:08:02.611 00:08:02.611 real 0m0.215s 00:08:02.611 user 0m0.022s 00:08:02.611 sys 0m0.062s 00:08:02.611 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.611 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:02.867 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:02.867 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:02.867 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:02.867 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.867 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.867 ************************************ 00:08:02.867 START TEST filesystem_xfs 00:08:02.867 ************************************ 00:08:02.867 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:02.867 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:02.867 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:02.867 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:02.867 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:02.867 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:02.867 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:02.867 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:08:02.867 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:02.867 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:02.867 14:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:02.867 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:02.867 = sectsz=512 attr=2, projid32bit=1 00:08:02.867 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:02.867 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:02.867 data = bsize=4096 blocks=130560, imaxpct=25 00:08:02.867 = sunit=0 swidth=0 blks 00:08:02.867 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:02.867 log =internal log bsize=4096 blocks=16384, version=2 00:08:02.867 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:02.867 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:03.431 Discarding blocks...Done. 
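Each of the three sub-tests above (filesystem_ext4, filesystem_btrfs, filesystem_xfs) runs the same cycle from target/filesystem.sh against the partition created earlier with parted: make a filesystem, mount it, create and remove a file with syncs in between, unmount, then confirm the target process is still alive and that nvme0n1 and its partition are still visible to lsblk. Condensed below, with the xfs case as the example and error handling omitted; the nvmfpid variable is the target pid recorded at startup.

    fstype=xfs                       # the three runs differ only in mkfs command and force flag
    dev=/dev/nvme0n1p1

    mkfs."$fstype" -f "$dev"         # the ext4 pass uses mkfs.ext4 -F instead
    mount "$dev" /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device

    kill -0 "$nvmfpid"               # target must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1
    lsblk -l -o NAME | grep -q -w nvme0n1p1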
00:08:03.431 14:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:03.431 14:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:05.957 14:26:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:05.957 14:26:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:05.957 14:26:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:05.957 14:26:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:05.957 14:26:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:05.957 14:26:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:05.957 14:26:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 78943 00:08:05.957 14:26:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:05.957 14:26:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:05.957 14:26:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:05.957 14:26:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:05.957 ************************************ 00:08:05.957 END TEST filesystem_xfs 00:08:05.957 ************************************ 00:08:05.957 00:08:05.957 real 0m3.032s 00:08:05.957 user 0m0.021s 00:08:05.957 sys 0m0.050s 00:08:05.957 14:26:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.957 14:26:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:05.957 14:26:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:05.957 14:26:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:05.957 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:05.957 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:05.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:05.957 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:05.957 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:05.957 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:05.957 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:05.957 14:26:18 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:05.957 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:05.957 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:05.957 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:05.957 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.957 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:05.957 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.957 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:05.957 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 78943 00:08:05.957 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 78943 ']' 00:08:05.957 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 78943 00:08:05.957 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:05.957 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:05.957 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78943 00:08:05.957 killing process with pid 78943 00:08:05.957 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:05.957 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:05.958 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78943' 00:08:05.958 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 78943 00:08:05.958 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 78943 00:08:06.215 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:06.215 ************************************ 00:08:06.215 END TEST nvmf_filesystem_no_in_capsule 00:08:06.215 ************************************ 00:08:06.215 00:08:06.215 real 0m8.191s 00:08:06.215 user 0m30.575s 00:08:06.215 sys 0m1.552s 00:08:06.215 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.215 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.215 14:26:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:06.215 14:26:18 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:06.215 14:26:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
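The teardown traced at the end of the no_in_capsule run removes the test partition, disconnects the initiator, deletes the subsystem over RPC, and stops the target; the in_capsule variant starting here then repeats the whole flow with a 4096-byte in-capsule data size on the transport. A condensed sketch of that teardown follows; rpc_cmd in the trace wraps scripts/rpc.py, so calling it directly with the default socket path is an assumption.

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid"
    wait "$nvmfpid" 2>/dev/null || true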
00:08:06.215 14:26:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.215 14:26:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:06.215 ************************************ 00:08:06.215 START TEST nvmf_filesystem_in_capsule 00:08:06.215 ************************************ 00:08:06.215 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:08:06.215 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:06.215 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:06.216 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:06.216 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:06.216 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.216 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=79235 00:08:06.216 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:06.216 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 79235 00:08:06.216 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 79235 ']' 00:08:06.216 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.216 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:06.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.216 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.216 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:06.216 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.473 [2024-07-10 14:26:18.515636] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:08:06.473 [2024-07-10 14:26:18.515763] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.473 [2024-07-10 14:26:18.645726] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:06.473 [2024-07-10 14:26:18.661804] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:06.473 [2024-07-10 14:26:18.700614] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.473 [2024-07-10 14:26:18.700692] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:06.473 [2024-07-10 14:26:18.700711] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.473 [2024-07-10 14:26:18.700725] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.473 [2024-07-10 14:26:18.700738] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:06.473 [2024-07-10 14:26:18.704326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.473 [2024-07-10 14:26:18.704439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.473 [2024-07-10 14:26:18.704525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.473 [2024-07-10 14:26:18.704540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.731 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:06.731 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:06.731 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:06.731 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:06.731 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.732 [2024-07-10 14:26:18.839526] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.732 Malloc1 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.732 14:26:18 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.732 [2024-07-10 14:26:18.958092] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:06.732 { 00:08:06.732 "aliases": [ 00:08:06.732 "95dd70b4-1532-47f2-a6b9-69d0dff6b1c7" 00:08:06.732 ], 00:08:06.732 "assigned_rate_limits": { 00:08:06.732 "r_mbytes_per_sec": 0, 00:08:06.732 "rw_ios_per_sec": 0, 00:08:06.732 "rw_mbytes_per_sec": 0, 00:08:06.732 "w_mbytes_per_sec": 0 00:08:06.732 }, 00:08:06.732 "block_size": 512, 00:08:06.732 "claim_type": "exclusive_write", 00:08:06.732 "claimed": true, 00:08:06.732 "driver_specific": {}, 00:08:06.732 "memory_domains": [ 00:08:06.732 { 00:08:06.732 "dma_device_id": "system", 00:08:06.732 "dma_device_type": 1 00:08:06.732 }, 00:08:06.732 { 00:08:06.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.732 "dma_device_type": 2 00:08:06.732 } 00:08:06.732 ], 00:08:06.732 "name": "Malloc1", 00:08:06.732 "num_blocks": 1048576, 00:08:06.732 "product_name": "Malloc disk", 00:08:06.732 "supported_io_types": { 00:08:06.732 "abort": true, 00:08:06.732 "compare": false, 00:08:06.732 "compare_and_write": false, 00:08:06.732 "copy": true, 00:08:06.732 "flush": true, 00:08:06.732 
"get_zone_info": false, 00:08:06.732 "nvme_admin": false, 00:08:06.732 "nvme_io": false, 00:08:06.732 "nvme_io_md": false, 00:08:06.732 "nvme_iov_md": false, 00:08:06.732 "read": true, 00:08:06.732 "reset": true, 00:08:06.732 "seek_data": false, 00:08:06.732 "seek_hole": false, 00:08:06.732 "unmap": true, 00:08:06.732 "write": true, 00:08:06.732 "write_zeroes": true, 00:08:06.732 "zcopy": true, 00:08:06.732 "zone_append": false, 00:08:06.732 "zone_management": false 00:08:06.732 }, 00:08:06.732 "uuid": "95dd70b4-1532-47f2-a6b9-69d0dff6b1c7", 00:08:06.732 "zoned": false 00:08:06.732 } 00:08:06.732 ]' 00:08:06.732 14:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:06.990 14:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:06.990 14:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:06.990 14:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:06.990 14:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:06.990 14:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:06.990 14:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:06.990 14:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:06.990 14:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:06.990 14:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:06.990 14:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:06.990 14:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:06.990 14:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:09.517 14:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:09.517 14:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:09.517 14:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:09.517 14:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:09.517 14:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:09.517 14:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:09.517 14:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:09.517 14:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP 
'([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:09.517 14:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:09.517 14:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:09.517 14:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:09.517 14:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:09.517 14:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:09.517 14:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:09.517 14:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:09.517 14:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:09.517 14:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:09.517 14:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:09.517 14:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:10.449 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:10.449 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:10.449 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:10.449 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.449 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.449 ************************************ 00:08:10.449 START TEST filesystem_in_capsule_ext4 00:08:10.449 ************************************ 00:08:10.449 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:10.449 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:10.449 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:10.449 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:10.449 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:10.449 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:10.449 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:10.449 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:10.449 
14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:10.449 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:10.449 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:10.449 mke2fs 1.46.5 (30-Dec-2021) 00:08:10.449 Discarding device blocks: 0/522240 done 00:08:10.449 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:10.449 Filesystem UUID: 4ebec055-6c1d-443a-b9d5-833a7495f969 00:08:10.449 Superblock backups stored on blocks: 00:08:10.449 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:10.449 00:08:10.449 Allocating group tables: 0/64 done 00:08:10.449 Writing inode tables: 0/64 done 00:08:10.449 Creating journal (8192 blocks): done 00:08:10.449 Writing superblocks and filesystem accounting information: 0/64 done 00:08:10.449 00:08:10.449 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:10.449 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:10.449 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:10.449 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:10.706 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:10.706 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:10.706 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:10.706 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:10.706 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 79235 00:08:10.706 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:10.706 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:10.706 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:10.706 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:10.706 00:08:10.706 real 0m0.342s 00:08:10.706 user 0m0.025s 00:08:10.706 sys 0m0.047s 00:08:10.706 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.706 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:10.706 ************************************ 00:08:10.706 END TEST filesystem_in_capsule_ext4 00:08:10.706 ************************************ 
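Stripped of the xtrace noise, the in-capsule ext4 pass above reduces to a short target-side RPC sequence plus a host-side mount/IO check. The sketch below is reconstructed from the trace; the concrete values (4096-byte in-capsule size, Malloc1 geometry, the 10.0.0.2:4420 listener) come straight from the log, while calling scripts/rpc.py directly is an assumption, since the test drives these commands through its rpc_cmd wrapper, and nvme connect also passes --hostnqn/--hostid arguments that are elided here.
# Target side, as issued via rpc_cmd in the trace (shown as direct rpc.py calls):
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096        # in-capsule data size = 4096
rpc.py bdev_malloc_create 512 512 -b Malloc1                  # 1,048,576 x 512-byte blocks = 512 MiB
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Host side, as exercised by filesystem_in_capsule_ext4:
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% && partprobe
mkfs.ext4 -F /dev/nvme0n1p1
mount /dev/nvme0n1p1 /mnt/device && touch /mnt/device/aaa && sync
rm /mnt/device/aaa && sync && umount /mnt/device
The btrfs and xfs passes that follow reuse the same connection and partition and only swap the mkfs/mount step.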
00:08:10.706 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:10.706 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:10.706 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:10.706 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.706 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.706 ************************************ 00:08:10.706 START TEST filesystem_in_capsule_btrfs 00:08:10.706 ************************************ 00:08:10.706 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:10.706 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:10.706 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:10.706 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:10.706 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:10.706 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:10.706 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:10.706 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:10.706 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:10.706 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:10.706 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:10.706 btrfs-progs v6.6.2 00:08:10.706 See https://btrfs.readthedocs.io for more information. 00:08:10.706 00:08:10.707 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:10.707 NOTE: several default settings have changed in version 5.15, please make sure 00:08:10.707 this does not affect your deployments: 00:08:10.707 - DUP for metadata (-m dup) 00:08:10.707 - enabled no-holes (-O no-holes) 00:08:10.707 - enabled free-space-tree (-R free-space-tree) 00:08:10.707 00:08:10.707 Label: (null) 00:08:10.707 UUID: 8040b2e1-0bac-492a-82d8-1d6be35ae9c1 00:08:10.707 Node size: 16384 00:08:10.707 Sector size: 4096 00:08:10.707 Filesystem size: 510.00MiB 00:08:10.707 Block group profiles: 00:08:10.707 Data: single 8.00MiB 00:08:10.707 Metadata: DUP 32.00MiB 00:08:10.707 System: DUP 8.00MiB 00:08:10.707 SSD detected: yes 00:08:10.707 Zoned device: no 00:08:10.707 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:10.707 Runtime features: free-space-tree 00:08:10.707 Checksum: crc32c 00:08:10.707 Number of devices: 1 00:08:10.707 Devices: 00:08:10.707 ID SIZE PATH 00:08:10.707 1 510.00MiB /dev/nvme0n1p1 00:08:10.707 00:08:10.707 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:10.707 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:10.707 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:10.707 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:10.707 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:10.707 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:10.707 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:10.707 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:10.707 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 79235 00:08:10.707 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:10.707 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:10.964 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:10.964 14:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:10.964 00:08:10.964 real 0m0.190s 00:08:10.964 user 0m0.025s 00:08:10.964 sys 0m0.058s 00:08:10.964 14:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.964 14:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:10.964 ************************************ 00:08:10.964 END TEST filesystem_in_capsule_btrfs 00:08:10.964 ************************************ 00:08:10.964 14:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1142 -- # return 0 00:08:10.964 14:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:10.964 14:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:10.964 14:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.964 14:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.964 ************************************ 00:08:10.964 START TEST filesystem_in_capsule_xfs 00:08:10.964 ************************************ 00:08:10.964 14:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:10.964 14:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:10.964 14:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:10.964 14:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:10.964 14:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:10.964 14:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:10.964 14:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:10.964 14:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:10.964 14:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:10.964 14:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:10.964 14:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:10.964 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:10.964 = sectsz=512 attr=2, projid32bit=1 00:08:10.964 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:10.964 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:10.964 data = bsize=4096 blocks=130560, imaxpct=25 00:08:10.964 = sunit=0 swidth=0 blks 00:08:10.964 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:10.964 log =internal log bsize=4096 blocks=16384, version=2 00:08:10.964 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:10.964 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:11.529 Discarding blocks...Done. 
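The mkfs.xfs geometry printed above can be sanity-checked against the partition size reported earlier by mkfs.btrfs (510.00MiB): the data section is 130560 blocks of 4096 bytes, i.e. the whole GPT partition carved out of the 512 MiB Malloc1 namespace. A quick check, as illustrative shell arithmetic:
# data section: bsize=4096, blocks=130560
echo $((4096 * 130560))                 # 534773760 bytes
echo $((4096 * 130560 / 1024 / 1024))   # 510 (MiB), matching the 510.00MiB partition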
00:08:11.529 14:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:11.529 14:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:13.427 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:13.427 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:13.427 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:13.427 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:13.427 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:13.427 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:13.427 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 79235 00:08:13.427 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:13.427 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:13.427 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:13.427 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:13.427 00:08:13.427 real 0m2.571s 00:08:13.427 user 0m0.016s 00:08:13.427 sys 0m0.050s 00:08:13.427 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.427 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:13.427 ************************************ 00:08:13.427 END TEST filesystem_in_capsule_xfs 00:08:13.427 ************************************ 00:08:13.427 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:13.427 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:13.427 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:13.427 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:13.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:13.685 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:13.685 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:13.685 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:13.685 14:26:25 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:13.685 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:13.685 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:13.685 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:13.685 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:13.685 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.685 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:13.685 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.685 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:13.685 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 79235 00:08:13.685 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 79235 ']' 00:08:13.685 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 79235 00:08:13.685 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:13.685 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:13.685 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79235 00:08:13.685 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:13.685 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:13.685 killing process with pid 79235 00:08:13.685 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79235' 00:08:13.685 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 79235 00:08:13.685 14:26:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 79235 00:08:13.943 14:26:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:13.943 00:08:13.943 real 0m7.607s 00:08:13.943 user 0m28.311s 00:08:13.943 sys 0m1.491s 00:08:13.943 14:26:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.943 14:26:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:13.943 ************************************ 00:08:13.943 END TEST nvmf_filesystem_in_capsule 00:08:13.943 ************************************ 00:08:13.943 14:26:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:13.943 14:26:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:13.943 14:26:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:08:13.943 14:26:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:13.943 14:26:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:13.943 14:26:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:13.943 14:26:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:13.943 14:26:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:13.943 rmmod nvme_tcp 00:08:13.943 rmmod nvme_fabrics 00:08:13.943 rmmod nvme_keyring 00:08:13.943 14:26:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:13.943 14:26:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:13.943 14:26:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:13.943 14:26:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:13.943 14:26:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:13.943 14:26:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:13.943 14:26:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:13.943 14:26:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:13.943 14:26:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:13.943 14:26:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.943 14:26:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:13.943 14:26:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.943 14:26:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:13.943 00:08:13.943 real 0m16.604s 00:08:13.943 user 0m59.081s 00:08:13.943 sys 0m3.424s 00:08:13.943 14:26:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.943 14:26:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:13.943 ************************************ 00:08:13.943 END TEST nvmf_filesystem 00:08:13.943 ************************************ 00:08:14.202 14:26:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:14.202 14:26:26 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:14.202 14:26:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:14.202 14:26:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.202 14:26:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:14.202 ************************************ 00:08:14.202 START TEST nvmf_target_discovery 00:08:14.202 ************************************ 00:08:14.202 14:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:14.202 * Looking for test storage... 
00:08:14.202 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:14.202 14:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:14.202 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:14.202 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.202 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.202 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.202 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.202 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.202 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.202 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.202 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.202 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.202 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.202 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:08:14.202 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:08:14.202 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.202 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.202 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:14.202 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.202 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:14.202 14:26:26 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.202 14:26:26 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.202 14:26:26 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.202 14:26:26 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.202 14:26:26 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.202 14:26:26 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.202 14:26:26 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:14.203 Cannot find device "nvmf_tgt_br" 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:14.203 Cannot find device "nvmf_tgt_br2" 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:14.203 Cannot find device "nvmf_tgt_br" 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:14.203 Cannot find device "nvmf_tgt_br2" 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:14.203 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:14.203 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:14.203 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:14.461 14:26:26 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:14.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:14.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:08:14.461 00:08:14.461 --- 10.0.0.2 ping statistics --- 00:08:14.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.461 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:14.461 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:14.461 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:08:14.461 00:08:14.461 --- 10.0.0.3 ping statistics --- 00:08:14.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.461 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:14.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:14.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:08:14.461 00:08:14.461 --- 10.0.0.1 ping statistics --- 00:08:14.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.461 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=79676 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 79676 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 79676 ']' 00:08:14.461 14:26:26 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:14.461 14:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.720 [2024-07-10 14:26:26.774715] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:08:14.720 [2024-07-10 14:26:26.774803] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.720 [2024-07-10 14:26:26.896556] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:14.720 [2024-07-10 14:26:26.910954] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:14.720 [2024-07-10 14:26:26.953421] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.720 [2024-07-10 14:26:26.953517] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:14.720 [2024-07-10 14:26:26.953536] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:14.720 [2024-07-10 14:26:26.953549] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:14.720 [2024-07-10 14:26:26.953561] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
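The startup notices above describe two ways to inspect this run's tracepoints (the target was started with -e 0xFFFF and shm id 0). Both commands are taken from the notices themselves; only the destination path of the copy is illustrative:
spdk_trace -s nvmf -i 0                      # live snapshot from the running nvmf_tgt (shm id 0)
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0   # keep the shared-memory trace file for offline analysis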
00:08:14.720 [2024-07-10 14:26:26.953756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.720 [2024-07-10 14:26:26.954030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.720 [2024-07-10 14:26:26.954499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.720 [2024-07-10 14:26:26.954515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.978 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:14.978 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:14.978 14:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:14.978 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:14.978 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.978 14:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.978 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:14.978 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.978 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.978 [2024-07-10 14:26:27.083779] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.978 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.978 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:14.978 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:14.978 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:14.978 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.978 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.978 Null1 00:08:14.978 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.978 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:14.978 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.978 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.978 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.978 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:14.978 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.978 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.978 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:08:14.979 [2024-07-10 14:26:27.137969] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.979 Null2 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.979 Null3 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.979 14:26:27 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.979 Null4 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.979 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.979 
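The xtrace above repeats the same four-step RPC sequence for Null1 through Null4 before adding the discovery listener and a referral. A minimal standalone sketch of that sequence, assuming the stock scripts/rpc.py helper from the SPDK repo and a transport already created with nvmf_create_transport -t tcp -o -u 8192, looks like:

  rpc=./scripts/rpc.py                                  # assumed helper path; rpc_cmd in the log wraps the same tool
  $rpc bdev_null_create Null1 102400 512                # null bdev, size/block size as in the test
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a = allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1                      # expose the bdev as a namespace
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420            # 'discovery' = nqn.2014-08.org.nvmexpress.discovery
  $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430                      # shows up as discovery log entry 5 below

With all four cnodes created this way, the nvme discover run that follows should report one current discovery entry, one NVMe subsystem record per cnode, and the 4430 referral.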
14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -a 10.0.0.2 -s 4420 00:08:15.238 00:08:15.238 Discovery Log Number of Records 6, Generation counter 6 00:08:15.238 =====Discovery Log Entry 0====== 00:08:15.238 trtype: tcp 00:08:15.238 adrfam: ipv4 00:08:15.238 subtype: current discovery subsystem 00:08:15.238 treq: not required 00:08:15.238 portid: 0 00:08:15.238 trsvcid: 4420 00:08:15.238 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:15.238 traddr: 10.0.0.2 00:08:15.238 eflags: explicit discovery connections, duplicate discovery information 00:08:15.238 sectype: none 00:08:15.238 =====Discovery Log Entry 1====== 00:08:15.238 trtype: tcp 00:08:15.238 adrfam: ipv4 00:08:15.238 subtype: nvme subsystem 00:08:15.238 treq: not required 00:08:15.238 portid: 0 00:08:15.238 trsvcid: 4420 00:08:15.238 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:15.238 traddr: 10.0.0.2 00:08:15.238 eflags: none 00:08:15.238 sectype: none 00:08:15.238 =====Discovery Log Entry 2====== 00:08:15.238 trtype: tcp 00:08:15.238 adrfam: ipv4 00:08:15.238 subtype: nvme subsystem 00:08:15.238 treq: not required 00:08:15.238 portid: 0 00:08:15.238 trsvcid: 4420 00:08:15.238 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:15.238 traddr: 10.0.0.2 00:08:15.238 eflags: none 00:08:15.238 sectype: none 00:08:15.238 =====Discovery Log Entry 3====== 00:08:15.238 trtype: tcp 00:08:15.238 adrfam: ipv4 00:08:15.238 subtype: nvme subsystem 00:08:15.238 treq: not required 00:08:15.238 portid: 0 00:08:15.238 trsvcid: 4420 00:08:15.238 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:15.238 traddr: 10.0.0.2 00:08:15.238 eflags: none 00:08:15.238 sectype: none 00:08:15.238 =====Discovery Log Entry 4====== 00:08:15.238 trtype: tcp 00:08:15.238 adrfam: ipv4 00:08:15.238 subtype: nvme subsystem 00:08:15.238 treq: not required 00:08:15.238 portid: 0 00:08:15.238 trsvcid: 4420 00:08:15.238 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:15.238 traddr: 10.0.0.2 00:08:15.238 eflags: none 00:08:15.238 sectype: none 00:08:15.238 =====Discovery Log Entry 5====== 00:08:15.238 trtype: tcp 00:08:15.238 adrfam: ipv4 00:08:15.238 subtype: discovery subsystem referral 00:08:15.238 treq: not required 00:08:15.238 portid: 0 00:08:15.238 trsvcid: 4430 00:08:15.238 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:15.238 traddr: 10.0.0.2 00:08:15.238 eflags: none 00:08:15.238 sectype: none 00:08:15.238 Perform nvmf subsystem discovery via RPC 00:08:15.238 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:15.238 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:15.238 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.238 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.238 [ 00:08:15.238 { 00:08:15.238 "allow_any_host": true, 00:08:15.238 "hosts": [], 00:08:15.238 "listen_addresses": [ 00:08:15.238 { 00:08:15.238 "adrfam": "IPv4", 00:08:15.238 "traddr": "10.0.0.2", 00:08:15.238 "trsvcid": "4420", 00:08:15.238 "trtype": "TCP" 00:08:15.238 } 00:08:15.238 ], 00:08:15.238 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:15.238 "subtype": "Discovery" 00:08:15.238 }, 00:08:15.238 { 00:08:15.238 "allow_any_host": true, 00:08:15.239 "hosts": [], 00:08:15.239 "listen_addresses": [ 00:08:15.239 { 
00:08:15.239 "adrfam": "IPv4", 00:08:15.239 "traddr": "10.0.0.2", 00:08:15.239 "trsvcid": "4420", 00:08:15.239 "trtype": "TCP" 00:08:15.239 } 00:08:15.239 ], 00:08:15.239 "max_cntlid": 65519, 00:08:15.239 "max_namespaces": 32, 00:08:15.239 "min_cntlid": 1, 00:08:15.239 "model_number": "SPDK bdev Controller", 00:08:15.239 "namespaces": [ 00:08:15.239 { 00:08:15.239 "bdev_name": "Null1", 00:08:15.239 "name": "Null1", 00:08:15.239 "nguid": "462291BDD9EF45EC9B66AEC0CD9A3FC9", 00:08:15.239 "nsid": 1, 00:08:15.239 "uuid": "462291bd-d9ef-45ec-9b66-aec0cd9a3fc9" 00:08:15.239 } 00:08:15.239 ], 00:08:15.239 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:15.239 "serial_number": "SPDK00000000000001", 00:08:15.239 "subtype": "NVMe" 00:08:15.239 }, 00:08:15.239 { 00:08:15.239 "allow_any_host": true, 00:08:15.239 "hosts": [], 00:08:15.239 "listen_addresses": [ 00:08:15.239 { 00:08:15.239 "adrfam": "IPv4", 00:08:15.239 "traddr": "10.0.0.2", 00:08:15.239 "trsvcid": "4420", 00:08:15.239 "trtype": "TCP" 00:08:15.239 } 00:08:15.239 ], 00:08:15.239 "max_cntlid": 65519, 00:08:15.239 "max_namespaces": 32, 00:08:15.239 "min_cntlid": 1, 00:08:15.239 "model_number": "SPDK bdev Controller", 00:08:15.239 "namespaces": [ 00:08:15.239 { 00:08:15.239 "bdev_name": "Null2", 00:08:15.239 "name": "Null2", 00:08:15.239 "nguid": "D995753F7CC94D909262C62CE9FDD750", 00:08:15.239 "nsid": 1, 00:08:15.239 "uuid": "d995753f-7cc9-4d90-9262-c62ce9fdd750" 00:08:15.239 } 00:08:15.239 ], 00:08:15.239 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:15.239 "serial_number": "SPDK00000000000002", 00:08:15.239 "subtype": "NVMe" 00:08:15.239 }, 00:08:15.239 { 00:08:15.239 "allow_any_host": true, 00:08:15.239 "hosts": [], 00:08:15.239 "listen_addresses": [ 00:08:15.239 { 00:08:15.239 "adrfam": "IPv4", 00:08:15.239 "traddr": "10.0.0.2", 00:08:15.239 "trsvcid": "4420", 00:08:15.239 "trtype": "TCP" 00:08:15.239 } 00:08:15.239 ], 00:08:15.239 "max_cntlid": 65519, 00:08:15.239 "max_namespaces": 32, 00:08:15.239 "min_cntlid": 1, 00:08:15.239 "model_number": "SPDK bdev Controller", 00:08:15.239 "namespaces": [ 00:08:15.239 { 00:08:15.239 "bdev_name": "Null3", 00:08:15.239 "name": "Null3", 00:08:15.239 "nguid": "0AA6F8765A964BD7B1DA6C92C8C1BA58", 00:08:15.239 "nsid": 1, 00:08:15.239 "uuid": "0aa6f876-5a96-4bd7-b1da-6c92c8c1ba58" 00:08:15.239 } 00:08:15.239 ], 00:08:15.239 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:15.239 "serial_number": "SPDK00000000000003", 00:08:15.239 "subtype": "NVMe" 00:08:15.239 }, 00:08:15.239 { 00:08:15.239 "allow_any_host": true, 00:08:15.239 "hosts": [], 00:08:15.239 "listen_addresses": [ 00:08:15.239 { 00:08:15.239 "adrfam": "IPv4", 00:08:15.239 "traddr": "10.0.0.2", 00:08:15.239 "trsvcid": "4420", 00:08:15.239 "trtype": "TCP" 00:08:15.239 } 00:08:15.239 ], 00:08:15.239 "max_cntlid": 65519, 00:08:15.239 "max_namespaces": 32, 00:08:15.239 "min_cntlid": 1, 00:08:15.239 "model_number": "SPDK bdev Controller", 00:08:15.239 "namespaces": [ 00:08:15.239 { 00:08:15.239 "bdev_name": "Null4", 00:08:15.239 "name": "Null4", 00:08:15.239 "nguid": "D7BCCC2376A447BEACCEFC1B30A59AF0", 00:08:15.239 "nsid": 1, 00:08:15.239 "uuid": "d7bccc23-76a4-47be-acce-fc1b30a59af0" 00:08:15.239 } 00:08:15.239 ], 00:08:15.239 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:15.239 "serial_number": "SPDK00000000000004", 00:08:15.239 "subtype": "NVMe" 00:08:15.239 } 00:08:15.239 ] 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 
1 4 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:15.239 14:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:15.239 rmmod nvme_tcp 00:08:15.239 rmmod nvme_fabrics 00:08:15.497 rmmod nvme_keyring 00:08:15.497 14:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:15.497 14:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:15.497 14:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:15.497 14:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 79676 ']' 00:08:15.497 14:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 79676 00:08:15.497 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 79676 ']' 00:08:15.497 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 79676 00:08:15.497 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:15.497 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:15.497 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79676 00:08:15.497 killing process with pid 79676 00:08:15.497 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- 
# process_name=reactor_0 00:08:15.497 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:15.497 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79676' 00:08:15.498 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 79676 00:08:15.498 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 79676 00:08:15.498 14:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:15.498 14:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:15.498 14:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:15.498 14:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:15.498 14:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:15.498 14:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.498 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.498 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.498 14:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:15.498 00:08:15.498 real 0m1.527s 00:08:15.498 user 0m3.283s 00:08:15.498 sys 0m0.502s 00:08:15.498 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.498 14:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.498 ************************************ 00:08:15.498 END TEST nvmf_target_discovery 00:08:15.498 ************************************ 00:08:15.756 14:26:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:15.756 14:26:27 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:15.756 14:26:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:15.756 14:26:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.756 14:26:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:15.756 ************************************ 00:08:15.756 START TEST nvmf_referrals 00:08:15.756 ************************************ 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:15.756 * Looking for test storage... 
00:08:15.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:15.756 Cannot find device "nvmf_tgt_br" 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:15.756 Cannot find device "nvmf_tgt_br2" 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:15.756 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:15.756 Cannot find device "nvmf_tgt_br" 00:08:15.757 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:08:15.757 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:15.757 Cannot find device "nvmf_tgt_br2" 
00:08:15.757 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:08:15.757 14:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:15.757 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:15.757 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:15.757 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:15.757 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:16.014 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:16.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:16.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:08:16.014 00:08:16.014 --- 10.0.0.2 ping statistics --- 00:08:16.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.014 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:16.014 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:16.014 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:08:16.014 00:08:16.014 --- 10.0.0.3 ping statistics --- 00:08:16.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.014 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:16.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:16.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:08:16.014 00:08:16.014 --- 10.0.0.1 ping statistics --- 00:08:16.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.014 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:16.014 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:08:16.015 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:16.015 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:16.015 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:16.015 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:16.015 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:16.015 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:16.015 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:16.015 14:26:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:16.015 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:16.015 14:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:16.015 14:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:16.015 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=79881 00:08:16.015 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 79881 00:08:16.015 14:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 79881 ']' 00:08:16.015 14:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:16.015 14:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.015 14:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:16.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
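Before the referrals test can start its own target, nvmf_veth_init rebuilt the virtual test network that the pings above just validated. Condensed, and keeping only the first target interface (the log creates nvmf_tgt_if2/10.0.0.3 the same way), the topology is roughly the sketch below; the nvmf_tgt path and core mask are the ones used by this run:

  ip netns add nvmf_tgt_ns_spdk                                    # target runs in its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                               # host -> target reachability, as checked above
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &    # nvmfappstart

nvmfappstart then waits on the /var/tmp/spdk.sock RPC socket (waitforlisten) before the script issues its first rpc_cmd.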
00:08:16.015 14:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.015 14:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:16.015 14:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:16.272 [2024-07-10 14:26:28.365409] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:08:16.272 [2024-07-10 14:26:28.365511] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.272 [2024-07-10 14:26:28.496058] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:16.272 [2024-07-10 14:26:28.519856] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:16.531 [2024-07-10 14:26:28.562729] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.531 [2024-07-10 14:26:28.563018] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.531 [2024-07-10 14:26:28.563184] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.531 [2024-07-10 14:26:28.563535] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.531 [2024-07-10 14:26:28.563672] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:16.531 [2024-07-10 14:26:28.563868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.531 [2024-07-10 14:26:28.563921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:16.531 [2024-07-10 14:26:28.564527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:16.531 [2024-07-10 14:26:28.564538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.484 [2024-07-10 14:26:29.494904] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.484 [2024-07-10 14:26:29.518765] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == 
\n\v\m\e ]] 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.484 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:17.743 14:26:29 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:17.743 14:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 
--hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:18.001 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 
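get_referral_ips is the check being driven here: the same referral list is read twice, once from the target over RPC and once from the wire with nvme discover, and the two sorted lists must match. A standalone sketch of the same comparison, assuming the discovery listener on 10.0.0.2:8009 and the hostnqn/hostid generated earlier in nvmf/common.sh, would be:

  # Target's view of its referrals, via JSON-RPC (same jq filter as the script).
  rpc_ips=$(./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort | xargs)
  # Initiator's view: discovery log minus the "current discovery subsystem" entry.
  nvme_ips=$(nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
                 -t tcp -a 10.0.0.2 -s 8009 -o json |
             jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort | xargs)
  [[ "$rpc_ips" == "$nvme_ips" ]] || echo "referral lists disagree: '$rpc_ips' vs '$nvme_ips'" >&2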
00:08:18.258 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:18.258 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:18.258 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:18.258 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:18.258 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:18.258 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:18.258 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:18.258 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:18.258 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:18.258 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:18.258 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:18.258 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:18.258 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:18.258 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:18.258 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:18.258 14:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.258 14:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.258 14:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.258 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:18.258 14:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.258 14:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.258 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:18.258 14:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:18.516 14:26:30 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:18.516 rmmod nvme_tcp 00:08:18.516 rmmod nvme_fabrics 00:08:18.516 rmmod nvme_keyring 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 79881 ']' 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 79881 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 79881 ']' 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 79881 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79881 00:08:18.516 killing process with pid 79881 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79881' 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 79881 00:08:18.516 14:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 79881 00:08:18.774 14:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:18.774 14:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:18.774 14:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:18.774 14:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:18.774 14:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:18.774 14:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.774 14:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.774 14:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.774 
14:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:18.774 ************************************ 00:08:18.774 END TEST nvmf_referrals 00:08:18.774 00:08:18.774 real 0m3.124s 00:08:18.774 user 0m10.533s 00:08:18.774 sys 0m0.795s 00:08:18.774 14:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.774 14:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.774 ************************************ 00:08:18.774 14:26:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:18.774 14:26:30 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:18.774 14:26:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:18.774 14:26:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.774 14:26:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:18.774 ************************************ 00:08:18.774 START TEST nvmf_connect_disconnect 00:08:18.774 ************************************ 00:08:18.774 14:26:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:18.774 * Looking for test storage... 00:08:18.774 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:18.774 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:19.031 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:19.031 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.031 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.031 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.031 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.031 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.031 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.031 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.031 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.031 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.031 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.031 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:08:19.031 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:08:19.031 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.031 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.031 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:19.031 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.031 14:26:31 nvmf_tcp.nvmf_connect_disconnect 
-- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:19.031 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.031 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.031 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.031 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 
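For reference: nvmf_veth_init, traced below, builds the topology those variables name: a target network namespace joined to the initiator side by veth pairs and a bridge. A condensed sketch of the same steps, using the interface and address names from this run:

  # Namespace plus veth pairs; the *_if ends carry traffic, the *_br ends join the bridge
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # Addressing: initiator 10.0.0.1 on the host, target 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

  # Bridge the host-side ends together and open the NVMe/TCP port
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

  ping -c 1 10.0.0.2   # reachability check into the namespace, as in the trace

This sketch keeps only the single-path core; the actual helper also adds a second target interface (nvmf_tgt_if2, 10.0.0.3) and first tears down any leftovers from a previous run, which is why the trace begins with "Cannot find device" messages.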
00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:19.032 Cannot find device "nvmf_tgt_br" 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:19.032 Cannot find device "nvmf_tgt_br2" 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:19.032 Cannot find device "nvmf_tgt_br" 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:19.032 Cannot find device "nvmf_tgt_br2" 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:19.032 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:19.032 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:19.032 14:26:31 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:19.032 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:19.289 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:19.289 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:19.289 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:19.289 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:19.289 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:19.289 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:19.289 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:19.289 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:19.289 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:19.289 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:19.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:08:19.289 00:08:19.289 --- 10.0.0.2 ping statistics --- 00:08:19.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.289 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:08:19.289 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:19.289 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:19.289 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:08:19.289 00:08:19.289 --- 10.0.0.3 ping statistics --- 00:08:19.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.290 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:08:19.290 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:19.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:19.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:08:19.290 00:08:19.290 --- 10.0.0.1 ping statistics --- 00:08:19.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.290 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:08:19.290 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.290 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:08:19.290 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:19.290 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.290 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:19.290 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:19.290 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.290 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:19.290 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:19.290 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:19.290 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:19.290 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:19.290 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.290 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=80187 00:08:19.290 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:19.290 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 80187 00:08:19.290 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 80187 ']' 00:08:19.290 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.290 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:19.290 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.290 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:19.290 14:26:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.290 [2024-07-10 14:26:31.530344] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:08:19.290 [2024-07-10 14:26:31.530747] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.547 [2024-07-10 14:26:31.660395] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
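For reference: nvmfappstart, traced above, launches the SPDK target inside the namespace and blocks until its JSON-RPC socket answers. A rough equivalent, assuming an SPDK checkout at ./spdk (the suite's own waitforlisten helper does the polling in the real run):

  # Flags from the trace: shm id 0, tracepoint mask 0xFFFF, core mask 0xF (4 cores)
  ip netns exec nvmf_tgt_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Poll the default RPC socket (/var/tmp/spdk.sock) until the app serves RPCs
  until ./spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done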
00:08:19.547 [2024-07-10 14:26:31.687527] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:19.547 [2024-07-10 14:26:31.733495] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.547 [2024-07-10 14:26:31.733859] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.547 [2024-07-10 14:26:31.733886] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.547 [2024-07-10 14:26:31.733900] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.547 [2024-07-10 14:26:31.733913] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.547 [2024-07-10 14:26:31.734064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.547 [2024-07-10 14:26:31.734201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.547 [2024-07-10 14:26:31.734335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.547 [2024-07-10 14:26:31.734331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.478 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:20.478 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:20.478 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:20.478 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:20.478 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:20.478 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.478 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:20.478 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.478 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:20.478 [2024-07-10 14:26:32.544579] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.479 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.479 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:20.479 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.479 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:20.479 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.479 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:20.479 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:20.479 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.479 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:20.479 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.479 14:26:32 nvmf_tcp.nvmf_connect_disconnect 
-- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:20.479 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.479 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:20.479 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.479 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:20.479 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.479 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:20.479 [2024-07-10 14:26:32.611871] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:20.479 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.479 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:20.479 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:20.479 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:20.479 14:26:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:23.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.670 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
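For reference: the burst of "disconnected 1 controller(s)" lines that follows is the connect/disconnect loop itself; everything before it is target construction over RPC. A sketch of both halves, with arguments copied from the trace and the host NQN/ID options from $NVME_HOST omitted for brevity:

  # Target side: TCP transport, a 64 MiB malloc bdev with 512 B blocks, one subsystem, one listener
  ./spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  ./spdk/scripts/rpc.py bdev_malloc_create 64 512                      # returns Malloc0
  ./spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Host side: 100 iterations of connect (8 I/O queues, per NVME_CONNECT above) then disconnect
  for i in $(seq 1 100); do
      nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      # crude stand-in for the suite's waitforserial helper
      until nvme list -o json 2>/dev/null | grep -q SPDKISFASTANDAWESOME; do sleep 0.1; done
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done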
00:09:25.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.402 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.132 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:11:13.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.981 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.688 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:02.688 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:02.688 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:02.688 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:02.688 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:02.688 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:02.688 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:02.688 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:02.688 rmmod nvme_tcp 00:12:02.688 rmmod nvme_fabrics 00:12:02.688 rmmod nvme_keyring 00:12:02.688 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:02.688 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:02.688 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:02.689 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 80187 ']' 00:12:02.689 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 80187 00:12:02.689 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 80187 ']' 00:12:02.689 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 80187 00:12:02.689 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:12:02.689 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:02.689 14:30:14 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80187 00:12:02.689 killing process with pid 80187 00:12:02.689 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:02.689 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:02.689 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80187' 00:12:02.689 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 80187 00:12:02.689 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 80187 00:12:02.689 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:02.689 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:02.689 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:02.689 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:02.689 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:02.689 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.689 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:02.689 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.689 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:02.689 00:12:02.689 real 3m43.701s 00:12:02.689 user 14m22.720s 00:12:02.689 sys 0m28.096s 00:12:02.689 ************************************ 00:12:02.689 END TEST nvmf_connect_disconnect 00:12:02.689 ************************************ 00:12:02.689 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:02.689 14:30:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:02.689 14:30:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:02.689 14:30:14 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:02.689 14:30:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:02.689 14:30:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:02.689 14:30:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:02.689 ************************************ 00:12:02.689 START TEST nvmf_multitarget 00:12:02.689 ************************************ 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:02.689 * Looking for test storage... 
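For reference: the teardown traced just above (and repeated at the end of every target test) is nvmftestfini. Roughly, under the netns setup used here:

  modprobe -v -r nvme-tcp          # the helper retries these a few times
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"          # stop the nvmf_tgt started by nvmfappstart
  ip netns delete nvmf_tgt_ns_spdk            # removes the namespace and its veth ends
  ip -4 addr flush nvmf_init_if

This is a sketch only; the real helper goes through killprocess (which verifies the process name first) and also clears the EXIT/SIGINT/SIGTERM traps.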
00:12:02.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.689 14:30:14 
nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:02.689 Cannot find device "nvmf_tgt_br" 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:02.689 Cannot find device "nvmf_tgt_br2" 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:02.689 Cannot find device "nvmf_tgt_br" 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:02.689 Cannot find device "nvmf_tgt_br2" 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:02.689 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:02.976 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:12:02.976 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:02.976 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:12:02.976 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:02.976 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:02.976 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:12:02.976 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:02.976 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:02.976 14:30:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:02.976 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:02.976 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:02.976 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:02.976 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:02.976 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:02.976 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:02.976 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:02.976 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:02.976 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:02.976 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:02.976 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:02.976 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:02.976 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:02.976 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:02.976 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:02.976 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:02.976 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:02.976 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:02.976 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:02.976 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:02.976 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:02.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:02.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:12:02.976 00:12:02.977 --- 10.0.0.2 ping statistics --- 00:12:02.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.977 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:12:02.977 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:02.977 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:02.977 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:12:02.977 00:12:02.977 --- 10.0.0.3 ping statistics --- 00:12:02.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.977 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:12:02.977 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:02.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:02.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:12:02.977 00:12:02.977 --- 10.0.0.1 ping statistics --- 00:12:02.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.977 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:12:02.977 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:02.977 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:12:02.977 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:02.977 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:02.977 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:02.977 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:02.977 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:02.977 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:02.977 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:02.977 14:30:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:02.977 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:02.977 14:30:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:02.977 14:30:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:02.977 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=83944 00:12:02.977 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 83944 00:12:02.977 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:02.977 14:30:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 83944 ']' 00:12:02.977 14:30:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.977 14:30:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:02.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.977 14:30:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
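For reference: the multitarget test drives its checks through test/nvmf/target/multitarget_rpc.py, a small wrapper around the nvmf_get_targets / nvmf_create_target / nvmf_delete_target JSON-RPC calls. A hedged sketch of the sequence started below, using the repo path from this run:

  RPC=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py

  $RPC nvmf_get_targets | jq length              # expect 1: only the default target exists
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32    # arguments as in the trace below
  # the trace goes on to create a second target (nvmf_tgt_2) the same way; the test is
  # expected to remove the extra targets again later via nvmf_delete_target -n <name>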
00:12:02.977 14:30:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:02.977 14:30:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:03.266 [2024-07-10 14:30:15.254151] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:12:03.266 [2024-07-10 14:30:15.254245] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.266 [2024-07-10 14:30:15.374432] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:03.266 [2024-07-10 14:30:15.392449] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:03.266 [2024-07-10 14:30:15.428343] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.266 [2024-07-10 14:30:15.428595] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.266 [2024-07-10 14:30:15.428792] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.266 [2024-07-10 14:30:15.428937] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.266 [2024-07-10 14:30:15.428980] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:03.266 [2024-07-10 14:30:15.429163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.266 [2024-07-10 14:30:15.429269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.266 [2024-07-10 14:30:15.430122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:03.266 [2024-07-10 14:30:15.430189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.266 14:30:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:03.266 14:30:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:12:03.266 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:03.266 14:30:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:03.266 14:30:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:03.266 14:30:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.266 14:30:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:03.266 14:30:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:03.266 14:30:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:03.526 14:30:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:03.526 14:30:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:03.526 "nvmf_tgt_1" 00:12:03.527 14:30:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 
32 00:12:03.785 "nvmf_tgt_2" 00:12:03.785 14:30:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:03.785 14:30:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:03.785 14:30:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:03.785 14:30:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:04.042 true 00:12:04.042 14:30:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:04.042 true 00:12:04.042 14:30:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:04.042 14:30:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:04.301 14:30:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:04.301 14:30:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:04.301 14:30:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:04.301 14:30:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:04.301 14:30:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:04.301 14:30:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:04.301 14:30:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:04.301 14:30:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:04.301 14:30:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:04.301 rmmod nvme_tcp 00:12:04.301 rmmod nvme_fabrics 00:12:04.301 rmmod nvme_keyring 00:12:04.301 14:30:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:04.301 14:30:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:04.301 14:30:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:04.301 14:30:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 83944 ']' 00:12:04.301 14:30:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 83944 00:12:04.301 14:30:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 83944 ']' 00:12:04.301 14:30:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 83944 00:12:04.301 14:30:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:12:04.301 14:30:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:04.301 14:30:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83944 00:12:04.301 killing process with pid 83944 00:12:04.301 14:30:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:04.301 14:30:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:04.301 14:30:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83944' 00:12:04.301 14:30:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 83944 00:12:04.301 14:30:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 83944 00:12:04.560 14:30:16 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:04.560 14:30:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:04.560 14:30:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:04.560 14:30:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:04.560 14:30:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:04.560 14:30:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.560 14:30:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:04.560 14:30:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.560 14:30:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:04.560 00:12:04.560 real 0m2.000s 00:12:04.560 user 0m6.029s 00:12:04.560 sys 0m0.577s 00:12:04.560 14:30:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:04.560 14:30:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:04.560 ************************************ 00:12:04.560 END TEST nvmf_multitarget 00:12:04.560 ************************************ 00:12:04.560 14:30:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:04.560 14:30:16 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:04.560 14:30:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:04.560 14:30:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:04.560 14:30:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:04.560 ************************************ 00:12:04.560 START TEST nvmf_rpc 00:12:04.560 ************************************ 00:12:04.560 14:30:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:04.818 * Looking for test storage... 
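The multitarget test that finishes above reduces to the following RPC sequence, condensed from the traced calls. The helper script path and the -s 32 flag are exactly as printed in the log; the length checks mirror the jq assertions the test performs.
    rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
    # one default target exists right after startup
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]
    # add two extra targets with the flags used in the trace
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]
    # delete them again and confirm only the default target remains
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]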
00:12:04.818 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:04.818 14:30:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:04.818 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:04.818 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:04.818 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.818 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.818 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.818 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.818 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.818 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.818 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.818 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.818 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.818 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:12:04.818 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:12:04.818 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.818 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.818 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:04.818 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:04.818 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:04.818 14:30:16 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.818 14:30:16 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.818 14:30:16 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.818 14:30:16 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:04.819 Cannot find device "nvmf_tgt_br" 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:04.819 Cannot find device "nvmf_tgt_br2" 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:04.819 Cannot find device "nvmf_tgt_br" 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:04.819 Cannot find device "nvmf_tgt_br2" 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:12:04.819 14:30:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:04.819 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:04.819 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:04.819 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:04.819 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:12:04.819 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:04.819 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:04.819 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:12:04.819 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:04.819 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:04.819 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:04.819 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:04.819 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:04.819 14:30:17 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:05.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:05.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:12:05.077 00:12:05.077 --- 10.0.0.2 ping statistics --- 00:12:05.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.077 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:05.077 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:05.077 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:12:05.077 00:12:05.077 --- 10.0.0.3 ping statistics --- 00:12:05.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.077 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:05.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:05.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:12:05.077 00:12:05.077 --- 10.0.0.1 ping statistics --- 00:12:05.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.077 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=84164 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 84164 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 84164 ']' 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:05.077 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.078 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:05.078 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.078 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:05.078 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.078 [2024-07-10 14:30:17.335887] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:12:05.078 [2024-07-10 14:30:17.335996] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.335 [2024-07-10 14:30:17.458023] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:05.335 [2024-07-10 14:30:17.474792] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.335 [2024-07-10 14:30:17.519236] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.335 [2024-07-10 14:30:17.519340] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
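The nvmfappstart step traced here runs the target inside the namespace, so only the initiator-side tools (nvme-cli, lsblk, ping) run on the host. A rough equivalent of what the harness does is sketched below; the socket-polling loop is a crude stand-in for its waitforlisten helper, not the real implementation.
    # NVMF_APP is prefixed with the namespace wrapper:
    #   NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
    # which amounts to launching the target like this:
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # wait for the RPC socket before issuing any RPC calls
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done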
00:12:05.335 [2024-07-10 14:30:17.519365] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.335 [2024-07-10 14:30:17.519381] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.335 [2024-07-10 14:30:17.519394] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:05.335 [2024-07-10 14:30:17.519653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.335 [2024-07-10 14:30:17.519899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.335 [2024-07-10 14:30:17.520779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.335 [2024-07-10 14:30:17.520800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.335 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:05.335 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:12:05.335 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:05.335 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:05.335 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:05.593 "poll_groups": [ 00:12:05.593 { 00:12:05.593 "admin_qpairs": 0, 00:12:05.593 "completed_nvme_io": 0, 00:12:05.593 "current_admin_qpairs": 0, 00:12:05.593 "current_io_qpairs": 0, 00:12:05.593 "io_qpairs": 0, 00:12:05.593 "name": "nvmf_tgt_poll_group_000", 00:12:05.593 "pending_bdev_io": 0, 00:12:05.593 "transports": [] 00:12:05.593 }, 00:12:05.593 { 00:12:05.593 "admin_qpairs": 0, 00:12:05.593 "completed_nvme_io": 0, 00:12:05.593 "current_admin_qpairs": 0, 00:12:05.593 "current_io_qpairs": 0, 00:12:05.593 "io_qpairs": 0, 00:12:05.593 "name": "nvmf_tgt_poll_group_001", 00:12:05.593 "pending_bdev_io": 0, 00:12:05.593 "transports": [] 00:12:05.593 }, 00:12:05.593 { 00:12:05.593 "admin_qpairs": 0, 00:12:05.593 "completed_nvme_io": 0, 00:12:05.593 "current_admin_qpairs": 0, 00:12:05.593 "current_io_qpairs": 0, 00:12:05.593 "io_qpairs": 0, 00:12:05.593 "name": "nvmf_tgt_poll_group_002", 00:12:05.593 "pending_bdev_io": 0, 00:12:05.593 "transports": [] 00:12:05.593 }, 00:12:05.593 { 00:12:05.593 "admin_qpairs": 0, 00:12:05.593 "completed_nvme_io": 0, 00:12:05.593 "current_admin_qpairs": 0, 00:12:05.593 "current_io_qpairs": 0, 00:12:05.593 "io_qpairs": 0, 00:12:05.593 "name": "nvmf_tgt_poll_group_003", 00:12:05.593 "pending_bdev_io": 0, 00:12:05.593 "transports": [] 00:12:05.593 } 00:12:05.593 ], 00:12:05.593 "tick_rate": 2200000000 00:12:05.593 }' 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.593 [2024-07-10 14:30:17.780034] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:05.593 "poll_groups": [ 00:12:05.593 { 00:12:05.593 "admin_qpairs": 0, 00:12:05.593 "completed_nvme_io": 0, 00:12:05.593 "current_admin_qpairs": 0, 00:12:05.593 "current_io_qpairs": 0, 00:12:05.593 "io_qpairs": 0, 00:12:05.593 "name": "nvmf_tgt_poll_group_000", 00:12:05.593 "pending_bdev_io": 0, 00:12:05.593 "transports": [ 00:12:05.593 { 00:12:05.593 "trtype": "TCP" 00:12:05.593 } 00:12:05.593 ] 00:12:05.593 }, 00:12:05.593 { 00:12:05.593 "admin_qpairs": 0, 00:12:05.593 "completed_nvme_io": 0, 00:12:05.593 "current_admin_qpairs": 0, 00:12:05.593 "current_io_qpairs": 0, 00:12:05.593 "io_qpairs": 0, 00:12:05.593 "name": "nvmf_tgt_poll_group_001", 00:12:05.593 "pending_bdev_io": 0, 00:12:05.593 "transports": [ 00:12:05.593 { 00:12:05.593 "trtype": "TCP" 00:12:05.593 } 00:12:05.593 ] 00:12:05.593 }, 00:12:05.593 { 00:12:05.593 "admin_qpairs": 0, 00:12:05.593 "completed_nvme_io": 0, 00:12:05.593 "current_admin_qpairs": 0, 00:12:05.593 "current_io_qpairs": 0, 00:12:05.593 "io_qpairs": 0, 00:12:05.593 "name": "nvmf_tgt_poll_group_002", 00:12:05.593 "pending_bdev_io": 0, 00:12:05.593 "transports": [ 00:12:05.593 { 00:12:05.593 "trtype": "TCP" 00:12:05.593 } 00:12:05.593 ] 00:12:05.593 }, 00:12:05.593 { 00:12:05.593 "admin_qpairs": 0, 00:12:05.593 "completed_nvme_io": 0, 00:12:05.593 "current_admin_qpairs": 0, 00:12:05.593 "current_io_qpairs": 0, 00:12:05.593 "io_qpairs": 0, 00:12:05.593 "name": "nvmf_tgt_poll_group_003", 00:12:05.593 "pending_bdev_io": 0, 00:12:05.593 "transports": [ 00:12:05.593 { 00:12:05.593 "trtype": "TCP" 00:12:05.593 } 00:12:05.593 ] 00:12:05.593 } 00:12:05.593 ], 00:12:05.593 "tick_rate": 2200000000 00:12:05.593 }' 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
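The stats checks around this point follow a simple pattern: create the TCP transport, then confirm every poll group registered it and that no qpairs exist yet. A minimal sketch using the same jq/awk filters as the trace (rpc_cmd is the harness wrapper around SPDK's RPC client, as used throughout this log):
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    stats=$(rpc_cmd nvmf_get_stats)
    echo "$stats" | jq '.poll_groups[].name' | wc -l                  # 4 poll groups for -m 0xF
    echo "$stats" | jq -r '.poll_groups[0].transports[0].trtype'      # "TCP" once the transport exists
    echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'  # 0 before any connect
    echo "$stats" | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'  # 0 before any connect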
00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:05.593 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.852 Malloc1 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.852 [2024-07-10 14:30:17.952561] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -a 10.0.0.2 -s 4420 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -a 10.0.0.2 -s 4420 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -a 10.0.0.2 -s 4420 00:12:05.852 [2024-07-10 14:30:17.980868] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9' 00:12:05.852 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:05.852 could not add new controller: failed to write to nvme-fabrics device 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.852 14:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:06.110 14:30:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:06.110 14:30:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:06.110 14:30:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:06.110 14:30:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:06.110 14:30:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:08.010 14:30:20 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:08.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:08.010 [2024-07-10 14:30:20.273658] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9' 00:12:08.010 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:08.010 could not add new controller: failed to write to nvme-fabrics device 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.010 14:30:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:08.269 14:30:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:08.269 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:08.269 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:08.269 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:08.269 14:30:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:10.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:10.798 14:30:22 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.798 [2024-07-10 14:30:22.567500] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:10.798 14:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:12.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.742 [2024-07-10 14:30:24.862704] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.742 14:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.743 14:30:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:12.743 14:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.743 14:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.743 14:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.743 14:30:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:13.000 14:30:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:13.000 14:30:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:12:13.000 14:30:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:13.000 14:30:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:13.000 14:30:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:14.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.900 [2024-07-10 14:30:27.181724] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.900 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.158 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.158 14:30:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:15.158 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.158 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.158 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.158 14:30:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:15.158 14:30:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:15.158 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:15.158 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:15.158 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:15.159 14:30:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:17.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.690 [2024-07-10 14:30:29.577079] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:17.690 14:30:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:19.594 14:30:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:19.594 14:30:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:19.594 14:30:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:19.594 14:30:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:19.594 14:30:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:19.594 
14:30:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:19.594 14:30:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:19.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.594 14:30:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:19.594 14:30:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:19.594 14:30:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:19.594 14:30:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.594 14:30:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:19.594 14:30:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.594 14:30:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:19.594 14:30:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:19.594 14:30:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.594 14:30:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.594 14:30:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.594 14:30:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.594 14:30:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.594 14:30:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.594 14:30:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.594 14:30:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:19.594 14:30:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:19.594 14:30:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.594 14:30:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.594 14:30:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.594 14:30:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.594 14:30:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.594 14:30:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.852 [2024-07-10 14:30:31.884479] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.852 14:30:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.852 14:30:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:19.852 14:30:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.852 14:30:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.852 14:30:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.852 14:30:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:19.852 14:30:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.852 14:30:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.852 14:30:31 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.852 14:30:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:19.852 14:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:19.852 14:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:19.852 14:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:19.852 14:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:19.852 14:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:22.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.385 [2024-07-10 14:30:34.183570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.385 [2024-07-10 14:30:34.231659] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.385 [2024-07-10 14:30:34.279677] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
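Each iteration of the two rpc.sh loops traced above exercises the same target-side RPC sequence. A minimal stand-alone sketch of one pass, assuming nvmf_tgt is already running with a TCP transport and a Malloc1 bdev, and calling scripts/rpc.py directly instead of the test's rpc_cmd wrapper:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Provision: serial number, TCP listener on 10.0.0.2:4420, one namespace, open host access.
$rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5   # the second loop omits -n and lets the target assign nsid 1
$rpc nvmf_subsystem_allow_any_host "$nqn"

# (the rpc.sh@81-94 variant connects an initiator at this point; see the sketch after the next trace line)

# Teardown: drop the namespace, then the subsystem.
$rpc nvmf_subsystem_remove_ns "$nqn" 5
$rpc nvmf_delete_subsystem "$nqn"
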
00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.385 [2024-07-10 14:30:34.327752] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.385 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
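On the initiator side, the first loop above (target/rpc.sh@86-91) uses nvme-cli plus the waitforserial and waitforserial_disconnect helpers, which simply poll lsblk for the subsystem serial number. Reduced to its essentials, with the same 15-try, 2-second budget seen in the common/autotest_common.sh trace and error handling omitted:

nqn=nqn.2016-06.io.spdk:cnode1
serial=SPDKISFASTANDAWESOME

# --hostnqn/--hostid are the values produced by 'nvme gen-hostnqn' during test setup.
nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 \
    --hostid=29002397-6866-4d44-9964-2c83ec2680a9

# waitforserial: poll until a block device reporting the expected serial shows up.
i=0
while (( i++ <= 15 )); do
    sleep 2
    (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && break
done

nvme disconnect -n "$nqn"

# waitforserial_disconnect: wait until no block device reports that serial any more.
while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
    sleep 2
done
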
00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.386 [2024-07-10 14:30:34.379803] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:22.386 "poll_groups": [ 00:12:22.386 { 00:12:22.386 "admin_qpairs": 2, 00:12:22.386 "completed_nvme_io": 65, 00:12:22.386 "current_admin_qpairs": 0, 00:12:22.386 "current_io_qpairs": 0, 00:12:22.386 "io_qpairs": 16, 00:12:22.386 "name": "nvmf_tgt_poll_group_000", 00:12:22.386 "pending_bdev_io": 0, 00:12:22.386 "transports": [ 00:12:22.386 { 00:12:22.386 "trtype": "TCP" 00:12:22.386 } 00:12:22.386 ] 00:12:22.386 }, 00:12:22.386 { 00:12:22.386 "admin_qpairs": 3, 00:12:22.386 "completed_nvme_io": 118, 00:12:22.386 "current_admin_qpairs": 0, 00:12:22.386 "current_io_qpairs": 0, 00:12:22.386 "io_qpairs": 17, 00:12:22.386 "name": "nvmf_tgt_poll_group_001", 00:12:22.386 "pending_bdev_io": 0, 00:12:22.386 "transports": [ 00:12:22.386 { 00:12:22.386 "trtype": "TCP" 00:12:22.386 } 00:12:22.386 ] 00:12:22.386 }, 00:12:22.386 { 00:12:22.386 "admin_qpairs": 1, 00:12:22.386 
"completed_nvme_io": 169, 00:12:22.386 "current_admin_qpairs": 0, 00:12:22.386 "current_io_qpairs": 0, 00:12:22.386 "io_qpairs": 19, 00:12:22.386 "name": "nvmf_tgt_poll_group_002", 00:12:22.386 "pending_bdev_io": 0, 00:12:22.386 "transports": [ 00:12:22.386 { 00:12:22.386 "trtype": "TCP" 00:12:22.386 } 00:12:22.386 ] 00:12:22.386 }, 00:12:22.386 { 00:12:22.386 "admin_qpairs": 1, 00:12:22.386 "completed_nvme_io": 68, 00:12:22.386 "current_admin_qpairs": 0, 00:12:22.386 "current_io_qpairs": 0, 00:12:22.386 "io_qpairs": 18, 00:12:22.386 "name": "nvmf_tgt_poll_group_003", 00:12:22.386 "pending_bdev_io": 0, 00:12:22.386 "transports": [ 00:12:22.386 { 00:12:22.386 "trtype": "TCP" 00:12:22.386 } 00:12:22.386 ] 00:12:22.386 } 00:12:22.386 ], 00:12:22.386 "tick_rate": 2200000000 00:12:22.386 }' 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:22.386 rmmod nvme_tcp 00:12:22.386 rmmod nvme_fabrics 00:12:22.386 rmmod nvme_keyring 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 84164 ']' 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 84164 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 84164 ']' 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 84164 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84164 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84164' 00:12:22.386 killing process with pid 84164 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 84164 00:12:22.386 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 84164 00:12:22.644 14:30:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:22.644 14:30:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:22.644 14:30:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:22.644 14:30:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:22.644 14:30:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:22.644 14:30:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.644 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:22.644 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.644 14:30:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:22.644 00:12:22.644 real 0m18.058s 00:12:22.644 user 1m7.753s 00:12:22.644 sys 0m2.474s 00:12:22.644 ************************************ 00:12:22.644 END TEST nvmf_rpc 00:12:22.644 ************************************ 00:12:22.644 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:22.644 14:30:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.644 14:30:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:22.644 14:30:34 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:22.644 14:30:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:22.644 14:30:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:22.644 14:30:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:22.644 ************************************ 00:12:22.644 START TEST nvmf_invalid 00:12:22.644 ************************************ 00:12:22.644 14:30:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:22.902 * Looking for test storage... 
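The nvmf_get_stats check at the end of the rpc test above captures the stats JSON and asserts that the per-poll-group qpair counters, summed with the jsum helper, are non-zero. jsum is essentially jq piped into awk; a minimal equivalent, assuming $stats already holds the JSON printed by nvmf_get_stats:

# Sum a numeric jq filter over the captured nvmf_get_stats output.
jsum() {
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s += $1} END {print s}'
}

(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # summed to 7 in the run above
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # summed to 70 in the run above
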
00:12:22.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.902 
14:30:34 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:22.902 14:30:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.902 14:30:35 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:22.902 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:22.903 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:22.903 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:22.903 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:22.903 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:22.903 Cannot find device "nvmf_tgt_br" 00:12:22.903 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:12:22.903 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:22.903 Cannot find device "nvmf_tgt_br2" 00:12:22.903 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:12:22.903 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:22.903 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:22.903 Cannot find device "nvmf_tgt_br" 00:12:22.903 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:12:22.903 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:22.903 Cannot find device "nvmf_tgt_br2" 00:12:22.903 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:12:22.903 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:22.903 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:22.903 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:22.903 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:22.903 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:12:22.903 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:22.903 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:22.903 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:12:22.903 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:22.903 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:22.903 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:22.903 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:22.903 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:23.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:23.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:12:23.161 00:12:23.161 --- 10.0.0.2 ping statistics --- 00:12:23.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.161 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:23.161 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:12:23.161 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:12:23.161 00:12:23.161 --- 10.0.0.3 ping statistics --- 00:12:23.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.161 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:23.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:23.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:12:23.161 00:12:23.161 --- 10.0.0.1 ping statistics --- 00:12:23.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.161 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:23.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=84661 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 84661 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 84661 ']' 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:23.161 14:30:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:23.161 [2024-07-10 14:30:35.440584] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 
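With NET_TYPE=virt, the nvmf_veth_init calls above build a private veth/bridge topology and the target is then started inside the nvmf_tgt_ns_spdk namespace. Leaving out the pre-cleanup and interface checks, the traced sequence amounts to the following sketch (the second target interface, nvmf_tgt_if2 on 10.0.0.3, is set up the same way):

# The target lives in its own namespace on 10.0.0.2; the initiator stays in the root namespace on 10.0.0.1.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the root-namespace ends together and open TCP/4420 toward the initiator interface.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

# The target is then launched inside the namespace (backgrounded; the test waits on its RPC socket).
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
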
00:12:23.161 [2024-07-10 14:30:35.440677] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.419 [2024-07-10 14:30:35.560077] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:23.419 [2024-07-10 14:30:35.578883] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:23.419 [2024-07-10 14:30:35.624421] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.419 [2024-07-10 14:30:35.624714] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.419 [2024-07-10 14:30:35.625006] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.419 [2024-07-10 14:30:35.625211] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.419 [2024-07-10 14:30:35.625408] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:23.419 [2024-07-10 14:30:35.625707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.419 [2024-07-10 14:30:35.625845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.419 [2024-07-10 14:30:35.625912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:23.419 [2024-07-10 14:30:35.625918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.735 14:30:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:23.735 14:30:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:12:23.735 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:23.735 14:30:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:23.735 14:30:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:23.735 14:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.735 14:30:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:23.735 14:30:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode3054 00:12:23.992 [2024-07-10 14:30:36.051369] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:23.992 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/07/10 14:30:36 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode3054 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:23.992 request: 00:12:23.992 { 00:12:23.992 "method": "nvmf_create_subsystem", 00:12:23.992 "params": { 00:12:23.992 "nqn": "nqn.2016-06.io.spdk:cnode3054", 00:12:23.992 "tgt_name": "foobar" 00:12:23.992 } 00:12:23.992 } 00:12:23.992 Got JSON-RPC error response 00:12:23.992 GoRPCClient: error on JSON-RPC call' 00:12:23.992 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/07/10 14:30:36 error on JSON-RPC call, method: nvmf_create_subsystem, 
params: map[nqn:nqn.2016-06.io.spdk:cnode3054 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:23.992 request: 00:12:23.992 { 00:12:23.992 "method": "nvmf_create_subsystem", 00:12:23.992 "params": { 00:12:23.992 "nqn": "nqn.2016-06.io.spdk:cnode3054", 00:12:23.992 "tgt_name": "foobar" 00:12:23.992 } 00:12:23.992 } 00:12:23.992 Got JSON-RPC error response 00:12:23.992 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:23.992 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:23.992 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode23750 00:12:24.250 [2024-07-10 14:30:36.343675] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23750: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:24.250 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/07/10 14:30:36 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode23750 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:24.250 request: 00:12:24.250 { 00:12:24.250 "method": "nvmf_create_subsystem", 00:12:24.250 "params": { 00:12:24.250 "nqn": "nqn.2016-06.io.spdk:cnode23750", 00:12:24.250 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:24.250 } 00:12:24.250 } 00:12:24.250 Got JSON-RPC error response 00:12:24.250 GoRPCClient: error on JSON-RPC call' 00:12:24.250 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/07/10 14:30:36 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode23750 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:24.250 request: 00:12:24.250 { 00:12:24.250 "method": "nvmf_create_subsystem", 00:12:24.250 "params": { 00:12:24.250 "nqn": "nqn.2016-06.io.spdk:cnode23750", 00:12:24.250 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:24.250 } 00:12:24.250 } 00:12:24.250 Got JSON-RPC error response 00:12:24.250 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:24.250 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:24.250 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode8992 00:12:24.509 [2024-07-10 14:30:36.639928] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8992: invalid model number 'SPDK_Controller' 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/07/10 14:30:36 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode8992], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:24.509 request: 00:12:24.509 { 00:12:24.509 "method": "nvmf_create_subsystem", 00:12:24.509 "params": { 00:12:24.509 "nqn": "nqn.2016-06.io.spdk:cnode8992", 00:12:24.509 "model_number": "SPDK_Controller\u001f" 00:12:24.509 } 00:12:24.509 } 00:12:24.509 Got JSON-RPC error response 00:12:24.509 GoRPCClient: error on JSON-RPC call' 00:12:24.509 
14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/07/10 14:30:36 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode8992], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:24.509 request: 00:12:24.509 { 00:12:24.509 "method": "nvmf_create_subsystem", 00:12:24.509 "params": { 00:12:24.509 "nqn": "nqn.2016-06.io.spdk:cnode8992", 00:12:24.509 "model_number": "SPDK_Controller\u001f" 00:12:24.509 } 00:12:24.509 } 00:12:24.509 Got JSON-RPC error response 00:12:24.509 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.509 14:30:36 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:24.509 14:30:36 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.509 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:24.510 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:24.510 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:24.510 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.510 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.510 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:24.510 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:24.510 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:24.510 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.510 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.510 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:24.510 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:24.510 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:24.510 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.510 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.510 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:24.510 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:24.510 14:30:36 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:24.510 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.510 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.510 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ w == \- ]] 00:12:24.510 14:30:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'wW0o%x:gW%,UBv0+D{ /dev/null' 00:12:28.292 14:30:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.292 14:30:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:28.292 00:12:28.292 real 0m5.534s 00:12:28.292 user 0m22.634s 00:12:28.292 sys 0m1.223s 00:12:28.292 14:30:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:28.292 14:30:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:28.292 ************************************ 00:12:28.292 END TEST nvmf_invalid 00:12:28.292 ************************************ 00:12:28.292 14:30:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:28.292 14:30:40 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:28.292 14:30:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:28.292 14:30:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:28.292 14:30:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:28.292 ************************************ 00:12:28.292 START TEST nvmf_abort 00:12:28.292 ************************************ 00:12:28.292 14:30:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:28.292 * Looking for test storage... 
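The nvmf_invalid checks traced above reduce to three nvmf_create_subsystem calls that are expected to fail, with the JSON-RPC error text matched against a pattern. A minimal standalone sketch of the same negative checks follows; the rpc.py path, flags and NQNs are copied from the trace, while the $rpc shorthand, the 2>&1 capture and the grep -q matching are assumptions (the script itself uses bash [[ ... == *pattern* ]] tests on the captured output).

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # 1) an unknown target name must be rejected
    out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode3054 2>&1) && exit 1
    grep -q 'Unable to find target' <<< "$out" || exit 1
    # 2) a serial number containing a control character (0x1f) must be rejected
    out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode23750 2>&1) && exit 1
    grep -q 'Invalid SN' <<< "$out" || exit 1
    # 3) a model number containing a control character must be rejected
    out=$($rpc nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode8992 2>&1) && exit 1
    grep -q 'Invalid MN' <<< "$out" || exit 1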
00:12:28.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:28.292 14:30:40 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:28.292 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:12:28.292 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:28.292 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:28.292 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:28.292 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:28.292 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:28.292 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:28.292 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:28.292 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:28.292 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:28.292 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@432 -- # nvmf_veth_init 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:28.550 Cannot find device "nvmf_tgt_br" 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # true 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:28.550 Cannot find device "nvmf_tgt_br2" 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # true 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:28.550 Cannot find device "nvmf_tgt_br" 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # true 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:28.550 Cannot find device "nvmf_tgt_br2" 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # true 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:28.550 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # true 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:28.550 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # true 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:28.550 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:28.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:28.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:12:28.808 00:12:28.808 --- 10.0.0.2 ping statistics --- 00:12:28.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.808 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:28.808 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:28.808 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:12:28.808 00:12:28.808 --- 10.0.0.3 ping statistics --- 00:12:28.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.808 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:28.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:28.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:12:28.808 00:12:28.808 --- 10.0.0.1 ping statistics --- 00:12:28.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.808 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=85153 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 85153 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 85153 ']' 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:28.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:28.808 14:30:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:28.808 [2024-07-10 14:30:41.008397] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:12:28.808 [2024-07-10 14:30:41.009014] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.066 [2024-07-10 14:30:41.131786] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:29.067 [2024-07-10 14:30:41.151422] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:29.067 [2024-07-10 14:30:41.191399] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
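The nvmf_veth_init sequence traced above builds the three-address test topology used by this and the following tests: the initiator keeps 10.0.0.1 in the root namespace, the two target interfaces carry 10.0.0.2 and 10.0.0.3 inside nvmf_tgt_ns_spdk, and the root-namespace legs are joined by the nvmf_br bridge with TCP/4420 opened toward the initiator. A condensed sketch of that bring-up, with interface names, addresses and iptables rules copied from the trace (the stale-device cleanup and the "Cannot find device" probes are omitted):

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the *_if legs face the stack, the *_br legs get bridged in the root namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the root-namespace legs and allow NVMe/TCP traffic in
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3    # reachability check before starting the target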
00:12:29.067 [2024-07-10 14:30:41.191458] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.067 [2024-07-10 14:30:41.191472] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.067 [2024-07-10 14:30:41.191482] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.067 [2024-07-10 14:30:41.191491] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:29.067 [2024-07-10 14:30:41.194330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.067 [2024-07-10 14:30:41.194474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:29.067 [2024-07-10 14:30:41.194485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.067 14:30:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:29.067 14:30:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:12:29.067 14:30:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:29.067 14:30:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:29.067 14:30:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:29.067 14:30:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.067 14:30:41 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:29.067 14:30:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.067 14:30:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:29.067 [2024-07-10 14:30:41.319176] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:29.067 14:30:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.067 14:30:41 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:29.067 14:30:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.067 14:30:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:29.325 Malloc0 00:12:29.325 14:30:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.325 14:30:41 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:29.325 14:30:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.325 14:30:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:29.325 Delay0 00:12:29.325 14:30:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.325 14:30:41 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:29.325 14:30:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.325 14:30:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:29.325 14:30:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.326 14:30:41 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:29.326 14:30:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.326 14:30:41 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:12:29.326 14:30:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.326 14:30:41 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:29.326 14:30:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.326 14:30:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:29.326 [2024-07-10 14:30:41.387107] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.326 14:30:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.326 14:30:41 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:29.326 14:30:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.326 14:30:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:29.326 14:30:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.326 14:30:41 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:29.326 [2024-07-10 14:30:41.571356] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:31.917 Initializing NVMe Controllers 00:12:31.917 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:31.917 controller IO queue size 128 less than required 00:12:31.917 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:31.917 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:31.917 Initialization complete. Launching workers. 
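The initialization banner above and the completion counts that follow on the next lines come from a target configured with a delay bdev, so that plenty of I/O is still outstanding when the aborts arrive. A sketch of the equivalent configuration and invocation, with every flag copied from the trace; issuing the setup through rpc.py directly is an assumption, since the script goes through its rpc_cmd wrapper.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # target side: TCP transport, a 64 MiB malloc bdev (4 KiB blocks) wrapped in a delay bdev
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # initiator side: run the abort example against the 10.0.0.2:4420 listener at queue depth 128
    /home/vagrant/spdk_repo/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128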
00:12:31.917 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29806 00:12:31.917 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29867, failed to submit 62 00:12:31.917 success 29810, unsuccess 57, failed 0 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:31.917 rmmod nvme_tcp 00:12:31.917 rmmod nvme_fabrics 00:12:31.917 rmmod nvme_keyring 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 85153 ']' 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 85153 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 85153 ']' 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 85153 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85153 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:31.917 killing process with pid 85153 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85153' 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 85153 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 85153 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:31.917 14:30:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:31.918 14:30:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.918 14:30:43 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.918 14:30:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.918 14:30:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:31.918 00:12:31.918 real 0m3.427s 00:12:31.918 user 0m9.781s 00:12:31.918 sys 0m0.964s 00:12:31.918 14:30:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:31.918 ************************************ 00:12:31.918 END TEST nvmf_abort 00:12:31.918 14:30:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:31.918 ************************************ 00:12:31.918 14:30:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:31.918 14:30:43 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:31.918 14:30:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:31.918 14:30:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:31.918 14:30:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:31.918 ************************************ 00:12:31.918 START TEST nvmf_ns_hotplug_stress 00:12:31.918 ************************************ 00:12:31.918 14:30:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:31.918 * Looking for test storage... 00:12:31.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:31.918 14:30:44 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.918 14:30:44 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:31.918 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.919 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:31.919 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:31.919 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:31.919 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 
-- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:31.919 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:31.919 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:31.919 Cannot find device "nvmf_tgt_br" 00:12:31.919 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:12:31.919 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:31.919 Cannot find device "nvmf_tgt_br2" 00:12:31.919 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:12:31.919 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:31.919 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:31.919 Cannot find device "nvmf_tgt_br" 00:12:31.919 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:12:31.919 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:31.919 Cannot find device "nvmf_tgt_br2" 00:12:31.919 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:12:31.919 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:31.919 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:31.919 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:31.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:31.919 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:12:31.919 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:31.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:31.919 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:12:31.919 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:31.919 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:31.919 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:32.178 14:30:44 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:32.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:32.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:12:32.178 00:12:32.178 --- 10.0.0.2 ping statistics --- 00:12:32.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.178 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:32.178 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:32.178 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:12:32.178 00:12:32.178 --- 10.0.0.3 ping statistics --- 00:12:32.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.178 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:32.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:32.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:12:32.178 00:12:32.178 --- 10.0.0.1 ping statistics --- 00:12:32.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.178 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=85379 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 85379 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 85379 ']' 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:32.178 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.178 [2024-07-10 14:30:44.466669] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:12:32.178 [2024-07-10 14:30:44.466768] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.436 [2024-07-10 14:30:44.592402] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
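As with the abort test above, the hotplug-stress run launches its own nvmf_tgt inside the namespace and then blocks until the application's RPC socket answers (the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line). A minimal sketch of that start-and-wait pattern; the binary path, core mask and namespace come from the trace, while the polling loop, the rpc_get_methods probe and the retry interval are assumptions standing in for the waitforlisten helper, whose implementation is not shown in this log.

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # hypothetical stand-in for waitforlisten: poll the default RPC socket up to 100 times
    for _ in $(seq 1 100); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        kill -0 "$nvmfpid" || exit 1    # give up if the target died during startup
        sleep 0.1
    done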
00:12:32.436 [2024-07-10 14:30:44.610382] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:32.436 [2024-07-10 14:30:44.650751] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.436 [2024-07-10 14:30:44.650805] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.436 [2024-07-10 14:30:44.650819] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.437 [2024-07-10 14:30:44.650829] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.437 [2024-07-10 14:30:44.650839] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:32.437 [2024-07-10 14:30:44.650944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.437 [2024-07-10 14:30:44.651096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.437 [2024-07-10 14:30:44.651103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.696 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:32.696 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:12:32.696 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:32.696 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:32.696 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.696 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.696 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:32.696 14:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:32.954 [2024-07-10 14:30:44.997004] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.954 14:30:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:33.212 14:30:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.212 [2024-07-10 14:30:45.485533] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.470 14:30:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:33.470 14:30:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:34.038 Malloc0 00:12:34.038 14:30:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:34.296 Delay0 00:12:34.296 14:30:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:12:34.553 14:30:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:34.553 NULL1 00:12:34.553 14:30:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:34.811 14:30:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=85491 00:12:34.811 14:30:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:12:34.811 14:30:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.811 14:30:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:36.186 Read completed with error (sct=0, sc=11) 00:12:36.186 14:30:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:36.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:36.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:36.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:36.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:36.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:36.454 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:36.454 14:30:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:36.454 14:30:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:36.715 true 00:12:36.715 14:30:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:12:36.715 14:30:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.282 14:30:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:37.540 14:30:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:37.540 14:30:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:37.799 true 00:12:37.799 14:30:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:12:37.799 14:30:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:38.057 14:30:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:38.316 14:30:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:38.316 14:30:50 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:38.576 true 00:12:38.576 14:30:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:12:38.576 14:30:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:39.510 14:30:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.769 14:30:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:39.769 14:30:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:40.098 true 00:12:40.098 14:30:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:12:40.098 14:30:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.098 14:30:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:40.664 14:30:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:40.664 14:30:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:40.664 true 00:12:40.664 14:30:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:12:40.664 14:30:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.921 14:30:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:41.486 14:30:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:12:41.486 14:30:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:41.744 true 00:12:41.744 14:30:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:12:41.744 14:30:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.309 14:30:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.875 14:30:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:42.875 14:30:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:42.875 true 00:12:42.875 14:30:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:12:42.875 14:30:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
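Condensed, the bring-up that precedes this stress loop (nvmf/common.sh@480 and ns_hotplug_stress.sh@23-@36 a few lines above) is the following RPC sequence. Every command is quoted from the log; only the harness bookkeeping (waitforlisten, timing_enter/exit) is replaced by a placeholder sleep.

#!/usr/bin/env bash
# Target bring-up as reflected in the xtrace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# common.sh@480: nvmf_tgt runs inside the target netns on cores 1-3 (-m 0xE).
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
sleep 2   # stand-in for waitforlisten on /var/tmp/spdk.sock

# ns_hotplug_stress.sh@27-@31: TCP transport, subsystem, data and discovery listeners.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# @32-@34: a malloc disk wrapped in a delay bdev, attached as namespace 1.
$rpc bdev_malloc_create 32 512 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# @35-@36: a 1000 MiB null bdev with 512-byte blocks, attached as namespace 2.
$rpc bdev_null_create NULL1 1000 512
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1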
00:12:43.441 14:30:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:43.699 14:30:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:43.699 14:30:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:43.957 true 00:12:43.957 14:30:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:12:43.958 14:30:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.216 14:30:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:44.474 14:30:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:44.474 14:30:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:44.733 true 00:12:44.733 14:30:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:12:44.733 14:30:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.992 14:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:44.992 14:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:44.992 14:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:45.302 true 00:12:45.302 14:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:12:45.302 14:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:46.676 14:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:46.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:46.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:46.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:46.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:46.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:46.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:46.676 14:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:46.676 14:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:46.934 true 00:12:46.934 14:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:12:46.934 14:30:59 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.869 14:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:47.869 14:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:47.869 14:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:48.129 true 00:12:48.129 14:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:12:48.129 14:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.387 14:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.644 14:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:48.644 14:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:48.903 true 00:12:48.903 14:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:12:48.903 14:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.469 14:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:49.469 14:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:49.470 14:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:49.728 true 00:12:49.728 14:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:12:49.728 14:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.664 14:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:50.929 14:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:50.929 14:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:51.186 true 00:12:51.186 14:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:12:51.186 14:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.443 14:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:51.702 14:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:12:51.702 14:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:51.961 true 00:12:51.961 14:31:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:12:51.961 14:31:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.219 14:31:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:52.477 14:31:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:12:52.477 14:31:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:52.735 true 00:12:52.735 14:31:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:12:52.735 14:31:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.667 14:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:53.924 14:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:53.924 14:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:54.182 true 00:12:54.182 14:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:12:54.182 14:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.440 14:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:54.698 14:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:12:54.698 14:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:54.957 true 00:12:54.957 14:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:12:54.957 14:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.214 14:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:55.472 14:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:12:55.472 14:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:55.729 true 00:12:55.729 14:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:12:55.729 14:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.686 14:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:56.944 14:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:12:56.944 14:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:57.201 true 00:12:57.459 14:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:12:57.459 14:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.459 14:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:58.024 14:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:12:58.024 14:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:12:58.024 true 00:12:58.024 14:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:12:58.024 14:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.589 14:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:58.848 14:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:12:58.848 14:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:58.848 true 00:12:58.848 14:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:12:58.848 14:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.106 14:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.364 14:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:12:59.364 14:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:59.622 true 00:12:59.622 14:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:12:59.622 14:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:00.996 14:31:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:00.996 
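The block repeating through the xtrace above (@44-@50, each round followed by another @45 remove) is one iteration of the hotplug loop: as long as the spdk_nvme_perf job started at @40 (PID 85491 here) is still alive, namespace 1 is detached and re-attached and NULL1 is resized to the next null_size value (1001, 1002, ...). A minimal sketch of that loop, with the perf invocation copied verbatim from the log:

#!/usr/bin/env bash
# Hotplug/resize loop as reflected in ns_hotplug_stress.sh@40-@53 of the xtrace.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# @40: 30 seconds of queue-depth-128 random reads against the target.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!

null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do                          # @44
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # @45: detach ns 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # @46: re-attach Delay0
    null_size=$((null_size + 1))                                   # @49
    $rpc bdev_null_resize NULL1 "$null_size"                       # @50
done
wait "$PERF_PID"                                                   # @53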
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:00.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:00.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:00.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:00.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:00.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:00.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:00.996 14:31:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:00.996 14:31:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:01.253 true 00:13:01.253 14:31:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:13:01.253 14:31:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.187 14:31:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:02.445 14:31:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:02.445 14:31:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:02.704 true 00:13:02.704 14:31:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:13:02.704 14:31:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.962 14:31:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:03.221 14:31:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:03.221 14:31:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:03.480 true 00:13:03.480 14:31:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:13:03.480 14:31:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.739 14:31:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:03.997 14:31:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:03.997 14:31:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:04.255 true 00:13:04.255 14:31:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:13:04.255 14:31:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.190 14:31:17 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:05.190 Initializing NVMe Controllers 00:13:05.190 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:05.190 Controller IO queue size 128, less than required. 00:13:05.190 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:05.190 Controller IO queue size 128, less than required. 00:13:05.190 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:05.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:05.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:05.190 Initialization complete. Launching workers. 00:13:05.190 ======================================================== 00:13:05.190 Latency(us) 00:13:05.190 Device Information : IOPS MiB/s Average min max 00:13:05.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 774.40 0.38 74268.49 2544.98 1110305.62 00:13:05.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9067.95 4.43 14115.53 3603.87 605567.87 00:13:05.190 ======================================================== 00:13:05.190 Total : 9842.34 4.81 18848.37 2544.98 1110305.62 00:13:05.190 00:13:05.449 14:31:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:05.449 14:31:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:05.706 true 00:13:05.706 14:31:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85491 00:13:05.706 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (85491) - No such process 00:13:05.706 14:31:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 85491 00:13:05.706 14:31:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.964 14:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:06.222 14:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:06.222 14:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:06.222 14:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:06.222 14:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:06.222 14:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:06.480 null0 00:13:06.480 14:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:06.480 14:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:06.480 14:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:06.739 null1 00:13:06.739 14:31:18 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:06.739 14:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:06.739 14:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:06.996 null2 00:13:06.996 14:31:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:06.996 14:31:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:06.996 14:31:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:07.254 null3 00:13:07.254 14:31:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:07.254 14:31:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:07.254 14:31:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:07.512 null4 00:13:07.512 14:31:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:07.512 14:31:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:07.512 14:31:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:07.770 null5 00:13:07.770 14:31:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:07.770 14:31:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:07.770 14:31:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:08.048 null6 00:13:08.048 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:08.048 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:08.048 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:08.307 null7 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
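Before following the eight-way phase whose workers are being launched here, a quick sanity check on the spdk_nvme_perf summary printed a few lines up: the Total row is simply the IOPS-weighted combination of the two namespaces (values as printed, so the last digit is rounding):

\[
774.40 + 9067.95 \approx 9842.34~\text{IOPS}, \qquad
\frac{774.40 \cdot 74268.49 + 9067.95 \cdot 14115.53}{9842.34} \approx 18848~\mu\text{s}.
\]

The 14,115 us average on the NULL1-backed namespace is what a sustained queue depth of 128 would predict at that rate (128 / 9067.95 s ~= 14.1 ms), while the Delay0-backed namespace's ~1.11 s maximum lines up with the 1,000,000 us latencies configured on the delay bdev at @33; its lower ~74 ms average plausibly reflects the reads that complete quickly with errors while namespace 1 is detached.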
00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:08.307 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
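The densely interleaved (( ++i )) / nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns lines above and below come from eight add_remove workers running concurrently, one per null bdev, whose PIDs are collected for the wait at @66. Condensed from the xtrace (ns_hotplug_stress.sh@14-@18 and @58-@66), with the usual caveat that this is a sketch of what the log shows rather than a copy of the script:

#!/usr/bin/env bash
# Eight-way namespace hotplug phase as reflected in the xtrace.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode1

# @14-@18: attach and detach one namespace, backed by one null bdev, ten times.
add_remove() {
    local nsid=$1 bdev=$2 i
    for ((i = 0; i < 10; i++)); do
        $rpc nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"
        $rpc nvmf_subsystem_remove_ns "$subsys" "$nsid"
    done
}

nthreads=8
pids=()

# @58-@60: one 100 MiB null bdev with 4 KiB blocks per worker (null0 .. null7).
for ((i = 0; i < nthreads; i++)); do
    $rpc bdev_null_create "null$i" 100 4096
done

# @62-@66: launch the workers in the background, collect their PIDs, wait for all,
# e.g. "wait 86521 86522 86525 86526 86528 86529 86532 86535" in the log.
for ((i = 0; i < nthreads; i++)); do
    add_remove "$((i + 1))" "null$i" &
    pids+=($!)
done
wait "${pids[@]}"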
00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 86521 86522 86525 86526 86528 86529 86532 86535 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.308 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:08.566 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:08.566 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:08.566 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:08.566 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:08.566 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:08.566 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.824 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:08.824 14:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:08.824 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.824 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.824 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:08.824 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.824 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.824 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:13:08.824 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.824 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:08.824 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:08.824 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.824 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.824 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:08.824 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.824 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.824 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:09.082 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.082 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.082 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:09.082 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.082 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.082 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:09.082 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.082 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.082 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:09.082 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:09.082 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:09.340 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:09.340 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:09.340 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:09.340 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.340 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:09.340 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.340 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.340 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:09.340 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:09.340 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.340 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.340 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:09.598 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.598 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.598 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:09.598 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.598 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.598 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:09.598 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.598 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.598 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:09.598 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.598 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.598 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:09.598 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.598 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.598 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:09.598 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
00:13:09.856 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.856 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.856 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:09.856 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:09.856 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:09.856 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:09.856 14:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:09.856 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.856 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:10.112 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.112 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.112 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:10.112 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.112 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.112 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:10.112 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:10.112 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.112 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.112 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:10.112 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.112 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.112 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:10.112 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.112 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.112 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:10.112 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.112 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.112 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:10.112 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.112 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.112 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:10.369 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:10.369 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.369 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.369 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:10.369 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:10.369 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:10.369 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:10.369 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:10.626 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:10.626 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.626 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.626 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:10.626 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.626 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:10.626 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.626 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.626 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:10.626 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.626 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.626 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:10.626 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.626 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.626 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:10.626 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.626 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.626 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:10.883 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.883 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.883 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:10.883 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.883 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.883 14:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:10.883 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:10.883 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.883 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.883 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:10.883 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:10.883 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:10.883 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:10.883 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:11.141 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.141 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.141 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.141 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:11.141 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:11.141 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:11.141 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.141 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.141 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:11.141 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.141 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.141 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:11.141 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.141 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.141 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:11.399 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.399 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.399 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:11.399 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.399 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.399 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:11.399 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:11.399 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.399 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.399 14:31:23 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:11.399 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:11.399 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.399 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.399 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:11.657 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:11.657 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:11.657 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:11.657 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.657 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.657 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:11.657 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.657 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:11.657 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.657 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.657 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:11.914 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:11.914 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.914 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.914 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.914 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.914 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:11.914 14:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:11.914 
14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.914 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.914 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:11.914 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:11.914 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:11.914 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.914 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.914 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:12.172 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.172 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.172 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:12.172 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.172 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.172 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:12.172 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:12.172 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:12.172 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:12.172 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.172 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.172 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:12.172 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.172 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.172 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:12.429 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:13:12.429 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:12.429 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:12.429 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.429 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.429 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:12.429 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.429 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.429 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:12.429 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.429 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.429 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.430 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:12.687 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.687 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.687 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:12.687 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.687 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.687 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:12.687 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.687 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.687 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:12.687 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:12.687 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:12.687 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:12.687 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.687 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.687 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:12.687 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:12.945 14:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.946 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:12.946 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.946 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.946 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:12.946 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.946 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.946 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:12.946 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:12.946 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.946 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.946 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:12.946 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.946 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.946 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:12.946 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:13.204 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.204 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.204 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:13.204 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.204 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.204 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:13.204 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:13.204 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:13.204 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.204 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.204 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:13.204 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:13.204 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.204 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.204 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:13.462 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:13.462 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:13.462 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.462 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.462 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.462 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:13.462 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.462 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.462 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:13.462 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.462 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.462 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:13.462 14:31:25 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:13.462 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:13.721 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.721 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.721 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:13.721 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.721 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.721 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:13.721 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.721 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.721 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:13.721 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:13.721 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:13.721 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.721 14:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.721 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:13.979 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.979 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.979 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:13.979 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:13.979 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:13.979 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.979 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.979 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.979 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:13:13.979 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.979 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.979 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.237 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.237 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.237 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.237 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.237 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:14.237 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.237 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.496 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.496 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.496 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:14.496 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:14.496 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:14.496 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:13:14.496 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:14.496 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:13:14.496 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:14.496 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:14.496 rmmod nvme_tcp 00:13:14.496 rmmod nvme_fabrics 00:13:14.496 rmmod nvme_keyring 00:13:14.496 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:14.496 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:13:14.496 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:13:14.496 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 85379 ']' 00:13:14.496 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 85379 00:13:14.496 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 85379 ']' 00:13:14.496 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 85379 00:13:14.496 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:13:14.496 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:14.496 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85379 00:13:14.496 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:14.496 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:14.496 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85379' 00:13:14.496 killing process with pid 85379 00:13:14.496 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 85379 00:13:14.496 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 85379 00:13:14.755 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:14.755 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:14.755 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:14.755 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:14.755 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:14.755 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.755 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:14.755 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.755 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:14.755 ************************************ 00:13:14.755 END TEST nvmf_ns_hotplug_stress 00:13:14.755 ************************************ 00:13:14.755 00:13:14.755 real 0m42.896s 00:13:14.755 user 3m30.428s 00:13:14.755 sys 0m12.683s 00:13:14.755 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:14.755 14:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.755 14:31:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:14.755 14:31:26 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:14.755 14:31:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:14.755 14:31:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:14.755 14:31:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:14.755 ************************************ 00:13:14.755 START TEST nvmf_connect_stress 00:13:14.755 ************************************ 00:13:14.755 14:31:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:14.755 * Looking for test storage... 
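The trace up to this point is the inner loop of ns_hotplug_stress.sh: it keeps attaching and detaching namespaces 1-8 (backed by null bdevs null0-null7) on subsystem nqn.2016-06.io.spdk:cnode1 through rpc.py while the target runs, then tears everything down via nvmftestfini (trap reset, unload of nvme-tcp/nvme-fabrics/nvme-keyring, killprocess of the target pid). The following is a minimal sketch of such a hotplug loop, assuming rpc.py at the path shown in the log and an already-configured subsystem with the eight null bdevs; the worker structure and iteration count are illustrative, not the verbatim script.

```bash
#!/usr/bin/env bash
# Hedged sketch of a namespace hotplug stress loop (not the verbatim
# target/ns_hotplug_stress.sh): one background worker per namespace,
# each repeatedly adding and removing its nsid on the subsystem.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # path taken from the log
nqn=nqn.2016-06.io.spdk:cnode1

hotplug_worker() {
    local nsid=$1 bdev=$2 i
    for (( i = 0; i < 10; ++i )); do
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"
    done
}

# Eight workers race each other, which is what produces the interleaved
# add/remove ordering visible in the trace above.
for n in {1..8}; do
    hotplug_worker "$n" "null$((n - 1))" &
done
wait
```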
00:13:14.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:14.755 14:31:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:14.755 14:31:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:14.755 14:31:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.755 14:31:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.755 14:31:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.755 14:31:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:14.755 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:14.756 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:14.756 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:15.014 Cannot find device "nvmf_tgt_br" 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:15.014 Cannot find device "nvmf_tgt_br2" 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:15.014 Cannot find device "nvmf_tgt_br" 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:15.014 Cannot find device "nvmf_tgt_br2" 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:13:15.014 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:15.014 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:15.014 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:15.273 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:15.273 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:15.273 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:15.273 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:15.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:15.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:13:15.273 00:13:15.273 --- 10.0.0.2 ping statistics --- 00:13:15.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.273 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:13:15.273 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:15.273 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:15.273 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:13:15.273 00:13:15.273 --- 10.0.0.3 ping statistics --- 00:13:15.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.273 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:13:15.273 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:15.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:15.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:13:15.273 00:13:15.273 --- 10.0.0.1 ping statistics --- 00:13:15.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.273 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:13:15.273 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.273 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:13:15.273 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:15.273 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.273 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:15.273 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:15.273 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.273 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:15.273 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:15.273 14:31:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:15.273 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:15.273 14:31:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:15.273 14:31:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.273 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=87841 00:13:15.273 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 87841 00:13:15.273 14:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:15.273 14:31:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 87841 ']' 00:13:15.273 14:31:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.273 14:31:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:15.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.273 14:31:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
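The nvmf_veth_init trace above builds the virtual test network used by the TCP tests: a namespace nvmf_tgt_ns_spdk holding the target-side veth ends nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3), the initiator end nvmf_init_if (10.0.0.1) on the host, all host-side peers enslaved to the nvmf_br bridge, an iptables ACCEPT rule for NVMe/TCP port 4420, and ping smoke tests in both directions. Below is a condensed sketch of those steps, not the verbatim nvmf/common.sh code; it assumes root privileges and a clean host (the "Cannot find device" / "Cannot open network namespace" messages above are just the prior-state cleanup failing harmlessly).

```bash
#!/usr/bin/env bash
# Condensed sketch of the veth/namespace topology from the trace above.
# Requires root; names and addresses follow the log.
set -e

ip netns add nvmf_tgt_ns_spdk

# One veth pair for the initiator, two for the target listeners.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# The target ends move into the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers so initiator and target can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Let NVMe/TCP traffic in and across the bridge, then smoke-test.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
```

The target itself is then launched inside that namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE), which is the EAL/reactor startup that follows in the trace.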
00:13:15.273 14:31:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:15.273 14:31:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.273 [2024-07-10 14:31:27.425879] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:13:15.273 [2024-07-10 14:31:27.425979] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.273 [2024-07-10 14:31:27.548636] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:15.532 [2024-07-10 14:31:27.566326] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:15.532 [2024-07-10 14:31:27.614362] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.532 [2024-07-10 14:31:27.614667] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.532 [2024-07-10 14:31:27.614899] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.532 [2024-07-10 14:31:27.615139] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.532 [2024-07-10 14:31:27.615359] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:15.532 [2024-07-10 14:31:27.615650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.532 [2024-07-10 14:31:27.615724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:15.532 [2024-07-10 14:31:27.615732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.468 [2024-07-10 14:31:28.476218] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.468 [2024-07-10 14:31:28.493846] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.468 NULL1 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=87894 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 
1 20) 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:16.468 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:16.469 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:16.469 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:16.469 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:16.469 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:16.469 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:16.469 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:16.469 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:16.469 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:16.469 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:16.469 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:16.469 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:16.469 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:16.469 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:16.469 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:16.469 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:16.469 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:16.469 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:16.469 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:16.469 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:16.469 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:16.469 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.469 14:31:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.469 14:31:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.727 14:31:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.727 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:16.727 14:31:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.727 14:31:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.727 14:31:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.985 14:31:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.985 14:31:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:16.985 14:31:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.985 14:31:29 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.985 14:31:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.552 14:31:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.552 14:31:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:17.552 14:31:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.552 14:31:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.552 14:31:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.811 14:31:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.811 14:31:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:17.811 14:31:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.811 14:31:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.811 14:31:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.069 14:31:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.069 14:31:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:18.069 14:31:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.069 14:31:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.069 14:31:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.328 14:31:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.328 14:31:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:18.328 14:31:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.328 14:31:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.328 14:31:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.587 14:31:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.587 14:31:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:18.587 14:31:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.587 14:31:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.587 14:31:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.155 14:31:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.155 14:31:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:19.155 14:31:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.155 14:31:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.155 14:31:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.413 14:31:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.414 14:31:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:19.414 14:31:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.414 14:31:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.414 14:31:31 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.672 14:31:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.672 14:31:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:19.672 14:31:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.672 14:31:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.672 14:31:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.930 14:31:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.930 14:31:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:19.930 14:31:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.930 14:31:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.930 14:31:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.189 14:31:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.189 14:31:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:20.189 14:31:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.189 14:31:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.189 14:31:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.755 14:31:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.755 14:31:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:20.755 14:31:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.755 14:31:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.756 14:31:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.013 14:31:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.013 14:31:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:21.013 14:31:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.013 14:31:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.013 14:31:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.270 14:31:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.270 14:31:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:21.270 14:31:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.270 14:31:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.270 14:31:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.528 14:31:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.528 14:31:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:21.528 14:31:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.528 14:31:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.528 14:31:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 
00:13:21.786 14:31:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.786 14:31:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:21.786 14:31:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.786 14:31:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.786 14:31:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.352 14:31:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.352 14:31:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:22.352 14:31:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.352 14:31:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.352 14:31:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.610 14:31:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.610 14:31:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:22.610 14:31:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.610 14:31:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.610 14:31:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.868 14:31:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.868 14:31:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:22.868 14:31:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.868 14:31:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.868 14:31:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.126 14:31:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.126 14:31:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:23.126 14:31:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.126 14:31:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.126 14:31:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.692 14:31:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.692 14:31:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:23.692 14:31:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.692 14:31:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.692 14:31:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.951 14:31:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.951 14:31:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:23.951 14:31:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.951 14:31:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.951 14:31:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.209 14:31:36 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.209 14:31:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:24.209 14:31:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.209 14:31:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.209 14:31:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.468 14:31:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.468 14:31:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:24.468 14:31:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.468 14:31:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.468 14:31:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.732 14:31:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.732 14:31:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:24.732 14:31:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.732 14:31:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.732 14:31:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.297 14:31:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.297 14:31:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:25.297 14:31:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.297 14:31:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.298 14:31:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.556 14:31:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.556 14:31:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:25.556 14:31:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.556 14:31:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.556 14:31:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.814 14:31:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.814 14:31:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:25.814 14:31:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.814 14:31:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.814 14:31:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.071 14:31:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.071 14:31:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:26.072 14:31:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.072 14:31:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.072 14:31:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.329 14:31:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.329 14:31:38 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:26.329 14:31:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.329 14:31:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.329 14:31:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.587 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:26.845 14:31:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.845 14:31:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87894 00:13:26.845 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (87894) - No such process 00:13:26.845 14:31:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 87894 00:13:26.845 14:31:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:26.845 14:31:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:26.845 14:31:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:26.845 14:31:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:26.845 14:31:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:26.845 14:31:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:26.845 14:31:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:26.845 14:31:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:26.845 14:31:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:26.845 rmmod nvme_tcp 00:13:26.845 rmmod nvme_fabrics 00:13:26.845 rmmod nvme_keyring 00:13:26.845 14:31:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:26.845 14:31:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:26.845 14:31:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:26.845 14:31:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 87841 ']' 00:13:26.845 14:31:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 87841 00:13:26.845 14:31:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 87841 ']' 00:13:26.845 14:31:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 87841 00:13:26.845 14:31:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:13:26.845 14:31:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:26.845 14:31:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87841 00:13:26.845 14:31:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:26.845 killing process with pid 87841 00:13:26.845 14:31:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:26.845 14:31:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87841' 00:13:26.845 14:31:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 87841 00:13:26.845 14:31:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 87841 00:13:27.103 14:31:39 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:27.104 14:31:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:27.104 14:31:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:27.104 14:31:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:27.104 14:31:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:27.104 14:31:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.104 14:31:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:27.104 14:31:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.104 14:31:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:27.104 00:13:27.104 real 0m12.294s 00:13:27.104 user 0m41.060s 00:13:27.104 sys 0m3.374s 00:13:27.104 14:31:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:27.104 14:31:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.104 ************************************ 00:13:27.104 END TEST nvmf_connect_stress 00:13:27.104 ************************************ 00:13:27.104 14:31:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:27.104 14:31:39 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:27.104 14:31:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:27.104 14:31:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:27.104 14:31:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:27.104 ************************************ 00:13:27.104 START TEST nvmf_fused_ordering 00:13:27.104 ************************************ 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:27.104 * Looking for test storage... 
00:13:27.104 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:27.104 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.105 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.105 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:27.105 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:27.105 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:27.105 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:27.105 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:27.105 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.105 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:27.105 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:27.105 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:27.105 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:27.105 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:27.105 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:27.105 Cannot find device "nvmf_tgt_br" 00:13:27.105 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:13:27.105 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:27.105 Cannot find device "nvmf_tgt_br2" 00:13:27.105 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:13:27.105 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:27.105 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:27.363 Cannot find device "nvmf_tgt_br" 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:27.363 Cannot find device "nvmf_tgt_br2" 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:13:27.363 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:27.363 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:27.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:27.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:13:27.363 00:13:27.363 --- 10.0.0.2 ping statistics --- 00:13:27.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.363 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:27.363 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:27.363 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:13:27.363 00:13:27.363 --- 10.0.0.3 ping statistics --- 00:13:27.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.363 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:13:27.363 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:27.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:27.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:13:27.622 00:13:27.622 --- 10.0.0.1 ping statistics --- 00:13:27.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.622 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:13:27.622 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.622 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:13:27.622 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:27.622 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.622 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:27.622 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:27.622 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.622 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:27.622 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:27.622 14:31:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:27.622 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:27.622 14:31:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:27.622 14:31:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:27.622 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=88219 00:13:27.622 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:27.622 14:31:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 88219 00:13:27.622 14:31:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 88219 ']' 00:13:27.622 14:31:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.622 14:31:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:27.622 14:31:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:27.622 14:31:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:27.622 14:31:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:27.622 [2024-07-10 14:31:39.760919] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:13:27.622 [2024-07-10 14:31:39.761024] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.622 [2024-07-10 14:31:39.885927] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:27.622 [2024-07-10 14:31:39.904828] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.880 [2024-07-10 14:31:39.943938] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.880 [2024-07-10 14:31:39.943994] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:27.880 [2024-07-10 14:31:39.944008] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:27.880 [2024-07-10 14:31:39.944019] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:27.880 [2024-07-10 14:31:39.944027] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:27.880 [2024-07-10 14:31:39.944056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:27.880 [2024-07-10 14:31:40.072168] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:27.880 [2024-07-10 14:31:40.096278] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:27.880 NULL1 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.880 14:31:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:27.880 [2024-07-10 14:31:40.162466] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:13:27.880 [2024-07-10 14:31:40.162522] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88256 ] 00:13:28.138 [2024-07-10 14:31:40.282848] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
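Stripped of xtrace noise, the target-side setup that fused_ordering.sh drives above through rpc_cmd (the autotest RPC helper) is the following sequence; arguments are copied verbatim from the trace, and only the comments are interpretation:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192                  # '-t tcp -o' comes from NVMF_TRANSPORT_OPTS set earlier
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512                          # 1000 MB null bdev, 512-byte blocks
  rpc_cmd bdev_wait_for_examine
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # surfaces as the 1 GB namespace reported below
  /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The long run of numbered fused_ordering(N) lines that follows appears to be the tool's own progress output after it attaches to nqn.2016-06.io.spdk:cnode1.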
00:13:28.397 Attached to nqn.2016-06.io.spdk:cnode1 00:13:28.397 Namespace ID: 1 size: 1GB 00:13:28.397 fused_ordering(0) 00:13:28.397 fused_ordering(1) 00:13:28.397 fused_ordering(2) 00:13:28.397 fused_ordering(3) 00:13:28.397 fused_ordering(4) 00:13:28.397 fused_ordering(5) 00:13:28.397 fused_ordering(6) 00:13:28.397 fused_ordering(7) 00:13:28.397 fused_ordering(8) 00:13:28.397 fused_ordering(9) 00:13:28.397 fused_ordering(10) 00:13:28.397 fused_ordering(11) 00:13:28.397 fused_ordering(12) 00:13:28.397 fused_ordering(13) 00:13:28.397 fused_ordering(14) 00:13:28.397 fused_ordering(15) 00:13:28.397 fused_ordering(16) 00:13:28.397 fused_ordering(17) 00:13:28.397 fused_ordering(18) 00:13:28.397 fused_ordering(19) 00:13:28.397 fused_ordering(20) 00:13:28.397 fused_ordering(21) 00:13:28.397 fused_ordering(22) 00:13:28.397 fused_ordering(23) 00:13:28.397 fused_ordering(24) 00:13:28.397 fused_ordering(25) 00:13:28.397 fused_ordering(26) 00:13:28.397 fused_ordering(27) 00:13:28.397 fused_ordering(28) 00:13:28.397 fused_ordering(29) 00:13:28.397 fused_ordering(30) 00:13:28.397 fused_ordering(31) 00:13:28.397 fused_ordering(32) 00:13:28.397 fused_ordering(33) 00:13:28.397 fused_ordering(34) 00:13:28.397 fused_ordering(35) 00:13:28.397 fused_ordering(36) 00:13:28.397 fused_ordering(37) 00:13:28.397 fused_ordering(38) 00:13:28.397 fused_ordering(39) 00:13:28.397 fused_ordering(40) 00:13:28.397 fused_ordering(41) 00:13:28.397 fused_ordering(42) 00:13:28.397 fused_ordering(43) 00:13:28.397 fused_ordering(44) 00:13:28.397 fused_ordering(45) 00:13:28.397 fused_ordering(46) 00:13:28.397 fused_ordering(47) 00:13:28.397 fused_ordering(48) 00:13:28.397 fused_ordering(49) 00:13:28.397 fused_ordering(50) 00:13:28.397 fused_ordering(51) 00:13:28.397 fused_ordering(52) 00:13:28.397 fused_ordering(53) 00:13:28.397 fused_ordering(54) 00:13:28.397 fused_ordering(55) 00:13:28.397 fused_ordering(56) 00:13:28.397 fused_ordering(57) 00:13:28.397 fused_ordering(58) 00:13:28.397 fused_ordering(59) 00:13:28.397 fused_ordering(60) 00:13:28.397 fused_ordering(61) 00:13:28.397 fused_ordering(62) 00:13:28.397 fused_ordering(63) 00:13:28.397 fused_ordering(64) 00:13:28.397 fused_ordering(65) 00:13:28.397 fused_ordering(66) 00:13:28.397 fused_ordering(67) 00:13:28.397 fused_ordering(68) 00:13:28.397 fused_ordering(69) 00:13:28.397 fused_ordering(70) 00:13:28.397 fused_ordering(71) 00:13:28.397 fused_ordering(72) 00:13:28.397 fused_ordering(73) 00:13:28.397 fused_ordering(74) 00:13:28.397 fused_ordering(75) 00:13:28.397 fused_ordering(76) 00:13:28.397 fused_ordering(77) 00:13:28.397 fused_ordering(78) 00:13:28.397 fused_ordering(79) 00:13:28.397 fused_ordering(80) 00:13:28.397 fused_ordering(81) 00:13:28.397 fused_ordering(82) 00:13:28.397 fused_ordering(83) 00:13:28.397 fused_ordering(84) 00:13:28.397 fused_ordering(85) 00:13:28.397 fused_ordering(86) 00:13:28.397 fused_ordering(87) 00:13:28.397 fused_ordering(88) 00:13:28.397 fused_ordering(89) 00:13:28.397 fused_ordering(90) 00:13:28.397 fused_ordering(91) 00:13:28.397 fused_ordering(92) 00:13:28.397 fused_ordering(93) 00:13:28.397 fused_ordering(94) 00:13:28.397 fused_ordering(95) 00:13:28.397 fused_ordering(96) 00:13:28.397 fused_ordering(97) 00:13:28.397 fused_ordering(98) 00:13:28.397 fused_ordering(99) 00:13:28.397 fused_ordering(100) 00:13:28.397 fused_ordering(101) 00:13:28.397 fused_ordering(102) 00:13:28.397 fused_ordering(103) 00:13:28.397 fused_ordering(104) 00:13:28.397 fused_ordering(105) 00:13:28.397 fused_ordering(106) 00:13:28.397 fused_ordering(107) 
00:13:28.397 fused_ordering(108) [fused_ordering(109) through fused_ordering(966) elided; the test printed one fused_ordering(N) entry per index, with the console timestamp advancing from 00:13:28.397 to 00:13:30.052] 00:13:30.052 fused_ordering(967)
00:13:30.052 fused_ordering(968) 00:13:30.052 fused_ordering(969) 00:13:30.052 fused_ordering(970) 00:13:30.052 fused_ordering(971) 00:13:30.052 fused_ordering(972) 00:13:30.052 fused_ordering(973) 00:13:30.052 fused_ordering(974) 00:13:30.052 fused_ordering(975) 00:13:30.052 fused_ordering(976) 00:13:30.052 fused_ordering(977) 00:13:30.052 fused_ordering(978) 00:13:30.052 fused_ordering(979) 00:13:30.052 fused_ordering(980) 00:13:30.052 fused_ordering(981) 00:13:30.052 fused_ordering(982) 00:13:30.052 fused_ordering(983) 00:13:30.052 fused_ordering(984) 00:13:30.052 fused_ordering(985) 00:13:30.052 fused_ordering(986) 00:13:30.052 fused_ordering(987) 00:13:30.052 fused_ordering(988) 00:13:30.052 fused_ordering(989) 00:13:30.052 fused_ordering(990) 00:13:30.053 fused_ordering(991) 00:13:30.053 fused_ordering(992) 00:13:30.053 fused_ordering(993) 00:13:30.053 fused_ordering(994) 00:13:30.053 fused_ordering(995) 00:13:30.053 fused_ordering(996) 00:13:30.053 fused_ordering(997) 00:13:30.053 fused_ordering(998) 00:13:30.053 fused_ordering(999) 00:13:30.053 fused_ordering(1000) 00:13:30.053 fused_ordering(1001) 00:13:30.053 fused_ordering(1002) 00:13:30.053 fused_ordering(1003) 00:13:30.053 fused_ordering(1004) 00:13:30.053 fused_ordering(1005) 00:13:30.053 fused_ordering(1006) 00:13:30.053 fused_ordering(1007) 00:13:30.053 fused_ordering(1008) 00:13:30.053 fused_ordering(1009) 00:13:30.053 fused_ordering(1010) 00:13:30.053 fused_ordering(1011) 00:13:30.053 fused_ordering(1012) 00:13:30.053 fused_ordering(1013) 00:13:30.053 fused_ordering(1014) 00:13:30.053 fused_ordering(1015) 00:13:30.053 fused_ordering(1016) 00:13:30.053 fused_ordering(1017) 00:13:30.053 fused_ordering(1018) 00:13:30.053 fused_ordering(1019) 00:13:30.053 fused_ordering(1020) 00:13:30.053 fused_ordering(1021) 00:13:30.053 fused_ordering(1022) 00:13:30.053 fused_ordering(1023) 00:13:30.053 14:31:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:30.053 14:31:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:30.053 14:31:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:30.053 14:31:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:30.311 rmmod nvme_tcp 00:13:30.311 rmmod nvme_fabrics 00:13:30.311 rmmod nvme_keyring 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 88219 ']' 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 88219 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 88219 ']' 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 88219 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering 
-- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88219 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:30.311 killing process with pid 88219 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88219' 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 88219 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 88219 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:30.311 00:13:30.311 real 0m3.349s 00:13:30.311 user 0m4.079s 00:13:30.311 sys 0m1.288s 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:30.311 14:31:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:30.311 ************************************ 00:13:30.311 END TEST nvmf_fused_ordering 00:13:30.311 ************************************ 00:13:30.571 14:31:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:30.571 14:31:42 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:30.571 14:31:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:30.571 14:31:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:30.571 14:31:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:30.571 ************************************ 00:13:30.571 START TEST nvmf_delete_subsystem 00:13:30.571 ************************************ 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:30.571 * Looking for test storage... 
00:13:30.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:30.571 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:30.572 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:30.572 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:30.572 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:30.572 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:30.572 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:30.572 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:30.572 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:30.572 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:30.572 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:30.572 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:30.572 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:30.572 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.572 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:30.572 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:30.572 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:30.572 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:30.572 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:30.572 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:30.572 Cannot find device "nvmf_tgt_br" 00:13:30.572 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:13:30.572 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:30.572 Cannot find device "nvmf_tgt_br2" 00:13:30.572 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:13:30.572 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:30.572 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:30.572 Cannot find device "nvmf_tgt_br" 00:13:30.572 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:13:30.572 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:30.572 Cannot find device "nvmf_tgt_br2" 00:13:30.572 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:13:30.572 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:30.832 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete 
nvmf_init_if 00:13:30.832 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:30.832 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:30.832 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:13:30.832 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:30.832 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:30.832 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:13:30.832 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:30.832 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:30.832 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:30.832 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:30.832 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:30.832 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:30.832 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:30.832 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:30.832 14:31:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:30.832 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:30.832 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:30.832 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:30.832 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:30.832 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:30.832 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:30.832 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:30.832 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:30.832 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:30.832 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:30.832 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:30.832 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:30.832 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:30.832 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:30.832 14:31:43 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:30.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:30.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:13:30.832 00:13:30.832 --- 10.0.0.2 ping statistics --- 00:13:30.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.833 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:13:30.833 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:30.833 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:30.833 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:13:30.833 00:13:30.833 --- 10.0.0.3 ping statistics --- 00:13:30.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.833 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:13:30.833 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:30.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:30.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:13:30.833 00:13:30.833 --- 10.0.0.1 ping statistics --- 00:13:30.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.833 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:13:30.833 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:30.833 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:13:30.833 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:30.833 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:30.833 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:30.833 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:30.833 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:30.833 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:30.833 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:31.092 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:31.092 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:31.092 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:31.092 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:31.092 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=88436 00:13:31.092 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 88436 00:13:31.092 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:31.092 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 88436 ']' 00:13:31.092 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.092 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:31.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
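The nvmf_veth_init / nvmfappstart xtrace above is dense, so the sketch below condenses the same fixture into plain shell, using only the interface names, addresses and flags visible in the trace (namespace nvmf_tgt_ns_spdk, veth pairs nvmf_init_if/nvmf_init_br and nvmf_tgt_if/nvmf_tgt_br, bridge nvmf_br, the 10.0.0.0/24 addresses and TCP port 4420). It is an illustrative reconstruction, not the verbatim common.sh code, and it omits the second target interface pair (nvmf_tgt_if2/nvmf_tgt_br2) that the real helper also creates.

# Hedged sketch of the veth/namespace fixture traced above (illustrative, not the real common.sh)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays in the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target end moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge the two host-side peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                             # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target namespace -> host
modprobe nvme-tcp
# nvmfappstart then launches the target inside the namespace with the flags seen in the trace:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

The three pings in the trace simply verify both directions of that fixture before the target application is started.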
00:13:31.092 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.093 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:31.093 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:31.093 [2024-07-10 14:31:43.209124] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:13:31.093 [2024-07-10 14:31:43.209240] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.093 [2024-07-10 14:31:43.335795] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:31.093 [2024-07-10 14:31:43.355225] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:31.351 [2024-07-10 14:31:43.391203] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.351 [2024-07-10 14:31:43.391460] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:31.351 [2024-07-10 14:31:43.391617] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.351 [2024-07-10 14:31:43.391752] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.351 [2024-07-10 14:31:43.391790] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:31.351 [2024-07-10 14:31:43.393324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.351 [2024-07-10 14:31:43.393341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:31.351 [2024-07-10 14:31:43.515098] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:31.351 [2024-07-10 14:31:43.539335] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:31.351 NULL1 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:31.351 Delay0 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=88473 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:31.351 14:31:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:31.610 [2024-07-10 14:31:43.735882] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
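Before the delete is issued, the trace above has built the whole data path through RPCs. A hedged, condensed restatement of that sequence follows; it assumes rpc_cmd forwards to scripts/rpc.py over the default /var/tmp/spdk.sock socket (the wrapper's usual behavior, not shown in this log) and otherwise reuses the exact verbs and arguments from the trace, with spdk_nvme_perf started in the background so the subsystem can be deleted while I/O is still outstanding.

# Condensed sketch of the delete_subsystem flow traced above (rpc.py path is an assumption)
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, flags copied from the trace
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512                            # null bdev NULL1: 1000 MB, 512-byte blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # delay bdev in front of NULL1 so I/O stays queued
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &                   # queue-depth-128 random read/write for 5 s
sleep 2
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1           # delete while perf still has I/O in flight

The burst of "completed with error (sct=0, sc=8)" and "starting I/O failed: -6" messages that follows in the trace is the expected consequence of tearing the subsystem down underneath the still-running perf job.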
00:13:33.514 14:31:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:33.514 14:31:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.514 14:31:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:33.514 [repeated "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" completions between 00:13:33.514 and 00:13:34.894 elided; the distinct transport errors and the final completions are kept below] [2024-07-10 14:31:45.773758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbfb0000c00 is same with the state(5) to be set [2024-07-10 14:31:45.774490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef3e50 is same with the state(5) to be set [2024-07-10 14:31:46.749929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef3c70 is same with the state(5) to be set 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read
completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 [2024-07-10 14:31:46.772887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eef0f0 is same with the state(5) to be set 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error 
(sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 [2024-07-10 14:31:46.773192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef4030 is same with the state(5) to be set 00:13:34.894 Initializing NVMe Controllers 00:13:34.894 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:34.894 Controller IO queue size 128, less than required. 00:13:34.894 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:34.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:34.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:34.894 Initialization complete. Launching workers. 00:13:34.894 ======================================================== 00:13:34.894 Latency(us) 00:13:34.894 Device Information : IOPS MiB/s Average min max 00:13:34.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 191.05 0.09 893434.13 641.90 1013649.78 00:13:34.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 171.20 0.08 907452.25 1388.47 2002865.38 00:13:34.894 ======================================================== 00:13:34.894 Total : 362.26 0.18 900059.13 641.90 2002865.38 00:13:34.894 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Read completed with error (sct=0, sc=8) 00:13:34.894 Write completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Write completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Write completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, 
sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 [2024-07-10 14:31:46.773909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbfb000cfe0 is same with the state(5) to be set 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Write completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Write completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Write completed with error (sct=0, sc=8) 00:13:34.895 Write completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Write completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Write completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 Read completed with error (sct=0, sc=8) 00:13:34.895 [2024-07-10 14:31:46.774144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbfb000d740 is same with the state(5) to be set 00:13:34.895 [2024-07-10 14:31:46.774779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef3c70 (9): Bad file descriptor 00:13:34.895 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:34.895 14:31:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.895 14:31:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:13:34.895 14:31:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 88473 00:13:34.895 14:31:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 88473 00:13:35.154 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (88473) - No such process 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 88473 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 88473 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 88473 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:35.154 [2024-07-10 14:31:47.300612] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=88524 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 88524 00:13:35.154 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:35.413 [2024-07-10 14:31:47.470011] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
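For reference, the delete_subsystem pass logged above starts spdk_nvme_perf against nqn.2016-06.io.spdk:cnode1, deletes the subsystem while that I/O is still in flight (which produces the aborted Read/Write completions and nvme_tcp recv-state errors above), and then rebuilds the subsystem for a clean second perf run. The following is a minimal stand-alone sketch of that flow, assembled only from commands already visible in this log; it assumes the target is already running with the Delay0 bdev created, and that rpc_cmd in the script behaves like calling scripts/rpc.py directly.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

# Subsystem with the Delay0 namespace and a TCP listener on 10.0.0.2:4420
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Random 70/30 read/write I/O for 3 seconds at queue depth 128 (same arguments the script logs)
$perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

# Delete the subsystem while perf is still running; in-flight commands complete with errors
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# Poll until perf exits, mirroring the script's kill -0 / sleep 0.5 loop
while kill -0 "$perf_pid" 2>/dev/null; do sleep 0.5; done
wait "$perf_pid" || echo 'perf exited with errors, as expected after the delete'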
00:13:35.672 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:35.672 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 88524 00:13:35.672 14:31:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:36.239 14:31:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:36.239 14:31:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 88524 00:13:36.239 14:31:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:36.810 14:31:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:36.810 14:31:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 88524 00:13:36.810 14:31:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:37.069 14:31:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:37.069 14:31:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 88524 00:13:37.069 14:31:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:37.710 14:31:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:37.710 14:31:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 88524 00:13:37.710 14:31:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:38.277 14:31:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:38.277 14:31:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 88524 00:13:38.277 14:31:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:38.277 Initializing NVMe Controllers 00:13:38.277 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:38.277 Controller IO queue size 128, less than required. 00:13:38.277 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:38.277 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:38.277 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:38.277 Initialization complete. Launching workers. 
00:13:38.277 ======================================================== 00:13:38.277 Latency(us) 00:13:38.277 Device Information : IOPS MiB/s Average min max 00:13:38.277 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003406.78 1000112.18 1043192.11 00:13:38.277 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005294.97 1000347.62 1042270.33 00:13:38.277 ======================================================== 00:13:38.277 Total : 256.00 0.12 1004350.88 1000112.18 1043192.11 00:13:38.277 00:13:38.844 14:31:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:38.844 14:31:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 88524 00:13:38.844 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (88524) - No such process 00:13:38.844 14:31:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 88524 00:13:38.844 14:31:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:38.844 14:31:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:13:38.844 14:31:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:38.844 14:31:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:13:38.844 14:31:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:38.844 14:31:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:13:38.844 14:31:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:38.844 14:31:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:38.844 rmmod nvme_tcp 00:13:38.844 rmmod nvme_fabrics 00:13:38.844 rmmod nvme_keyring 00:13:38.844 14:31:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:38.844 14:31:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:13:38.844 14:31:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:13:38.844 14:31:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 88436 ']' 00:13:38.844 14:31:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 88436 00:13:38.844 14:31:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 88436 ']' 00:13:38.844 14:31:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 88436 00:13:38.844 14:31:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:13:38.844 14:31:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:38.844 14:31:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88436 00:13:38.844 killing process with pid 88436 00:13:38.844 14:31:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:38.844 14:31:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:38.844 14:31:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88436' 00:13:38.844 14:31:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 88436 00:13:38.844 14:31:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 88436 00:13:38.844 14:31:51 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:38.844 14:31:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:38.844 14:31:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:38.844 14:31:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:38.844 14:31:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:38.844 14:31:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.844 14:31:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:38.844 14:31:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.103 14:31:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:39.103 00:13:39.103 real 0m8.497s 00:13:39.103 user 0m27.017s 00:13:39.103 sys 0m1.474s 00:13:39.103 ************************************ 00:13:39.103 END TEST nvmf_delete_subsystem 00:13:39.103 ************************************ 00:13:39.103 14:31:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:39.103 14:31:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:39.103 14:31:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:39.103 14:31:51 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:39.103 14:31:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:39.103 14:31:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:39.103 14:31:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:39.103 ************************************ 00:13:39.103 START TEST nvmf_ns_masking 00:13:39.103 ************************************ 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:39.103 * Looking for test storage... 
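The nvmf_ns_masking run that starts here checks per-host namespace visibility over NVMe/TCP. Below is a condensed sketch of the core sequence it exercises, put together only from commands that appear later in this log; the listener address, the Malloc1 bdev, the host NQN/ID and the /dev/nvme0 device name are simply the values this particular run used, and the full script additionally covers the auto-visible case and a second host (nqn.2016-06.io.spdk:host2).

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side: TCP transport, a Malloc bdev, and a namespace that is NOT auto-visible
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

# Host side: connect as host1 (the -I host ID is whatever uuidgen produced for this run)
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
     -I 2dbb5d2c-00e7-4266-8e9c-05179538acd1 -a 10.0.0.2 -s 4420 -i 4

# Masked namespace: id-ns reports an all-zero NGUID, which is what the [[ ... ]] checks below test for
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid

# Expose namespace 1 to host1, re-check, then mask it again
$rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid
$rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

nvme disconnect -n nqn.2016-06.io.spdk:cnode1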
00:13:39.103 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=888c0bdd-cb1d-4c64-937e-472e7eb318dd 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=019729d1-8fdd-4f1e-aa5c-f30ba2a1379c 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:39.103 
14:31:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=2dbb5d2c-00e7-4266-8e9c-05179538acd1 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:39.103 Cannot find device "nvmf_tgt_br" 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:13:39.103 14:31:51 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:39.103 Cannot find device "nvmf_tgt_br2" 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:13:39.103 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:39.104 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:39.104 Cannot find device "nvmf_tgt_br" 00:13:39.104 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:13:39.104 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:39.104 Cannot find device "nvmf_tgt_br2" 00:13:39.104 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:13:39.104 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:39.362 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:39.362 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:39.362 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:39.362 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:13:39.362 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:39.362 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:39.362 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:13:39.362 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:39.362 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:39.362 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:39.362 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:39.362 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:39.362 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:39.362 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:39.362 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:39.362 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:39.362 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:39.362 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:39.362 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:39.362 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:39.362 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:39.362 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:39.362 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:39.362 14:31:51 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:39.362 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:39.362 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:39.362 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:39.620 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:39.620 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:39.620 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:39.620 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:39.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:39.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:13:39.620 00:13:39.620 --- 10.0.0.2 ping statistics --- 00:13:39.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.620 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:13:39.620 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:39.620 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:39.620 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:13:39.620 00:13:39.621 --- 10.0.0.3 ping statistics --- 00:13:39.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.621 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:13:39.621 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:39.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:39.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:13:39.621 00:13:39.621 --- 10.0.0.1 ping statistics --- 00:13:39.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.621 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:13:39.621 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:39.621 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:13:39.621 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:39.621 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:39.621 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:39.621 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:39.621 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:39.621 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:39.621 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:39.621 14:31:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:39.621 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:39.621 14:31:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:39.621 14:31:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:39.621 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=88762 00:13:39.621 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:39.621 14:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 88762 00:13:39.621 14:31:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 88762 ']' 00:13:39.621 14:31:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.621 14:31:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:39.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.621 14:31:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.621 14:31:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:39.621 14:31:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:39.621 [2024-07-10 14:31:51.779049] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:13:39.621 [2024-07-10 14:31:51.779154] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.621 [2024-07-10 14:31:51.901277] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:39.879 [2024-07-10 14:31:51.919002] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.879 [2024-07-10 14:31:51.965858] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:39.879 [2024-07-10 14:31:51.965945] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:39.879 [2024-07-10 14:31:51.965969] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:39.879 [2024-07-10 14:31:51.965988] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:39.879 [2024-07-10 14:31:51.966003] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:39.879 [2024-07-10 14:31:51.966055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.879 14:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:39.879 14:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:13:39.879 14:31:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:39.879 14:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:39.880 14:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:39.880 14:31:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.880 14:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:40.140 [2024-07-10 14:31:52.380678] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:40.140 14:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:40.140 14:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:40.140 14:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:40.707 Malloc1 00:13:40.707 14:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:40.964 Malloc2 00:13:40.964 14:31:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:41.221 14:31:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:41.479 14:31:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.737 [2024-07-10 14:31:53.867200] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.737 14:31:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:41.737 14:31:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2dbb5d2c-00e7-4266-8e9c-05179538acd1 -a 10.0.0.2 -s 4420 -i 4 00:13:41.737 14:31:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:41.738 14:31:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:41.738 14:31:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:41.738 14:31:54 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:41.738 14:31:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:44.284 14:31:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:44.284 14:31:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:44.284 14:31:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:44.284 14:31:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:44.284 14:31:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:44.284 14:31:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:44.284 14:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:44.284 14:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:44.284 14:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:44.284 14:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:44.284 14:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:44.284 14:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:44.284 14:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:44.284 [ 0]:0x1 00:13:44.284 14:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:44.284 14:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:44.284 14:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=78beaf91aaea43d2aab313d1e2f16dbf 00:13:44.284 14:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 78beaf91aaea43d2aab313d1e2f16dbf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:44.284 14:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:44.284 14:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:44.284 14:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:44.284 14:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:44.284 [ 0]:0x1 00:13:44.284 14:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:44.284 14:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:44.284 14:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=78beaf91aaea43d2aab313d1e2f16dbf 00:13:44.284 14:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 78beaf91aaea43d2aab313d1e2f16dbf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:44.284 14:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:44.284 14:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:44.284 14:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:44.284 [ 1]:0x2 00:13:44.285 14:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:44.285 14:31:56 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:13:44.285 14:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=673f339ac71548968958b25a8cda6d7c 00:13:44.285 14:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 673f339ac71548968958b25a8cda6d7c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:44.285 14:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:44.285 14:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:44.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.543 14:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.802 14:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:45.059 14:31:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:45.059 14:31:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2dbb5d2c-00e7-4266-8e9c-05179538acd1 -a 10.0.0.2 -s 4420 -i 4 00:13:45.059 14:31:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:45.059 14:31:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:45.059 14:31:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:45.059 14:31:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:13:45.059 14:31:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:13:45.059 14:31:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:47.591 [ 0]:0x2 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=673f339ac71548968958b25a8cda6d7c 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 673f339ac71548968958b25a8cda6d7c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:47.591 [ 0]:0x1 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=78beaf91aaea43d2aab313d1e2f16dbf 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 78beaf91aaea43d2aab313d1e2f16dbf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:47.591 [ 1]:0x2 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=673f339ac71548968958b25a8cda6d7c 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 673f339ac71548968958b25a8cda6d7c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:47.591 14:31:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:47.851 14:32:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:47.851 14:32:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:47.851 14:32:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:47.851 14:32:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:47.851 14:32:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:47.851 14:32:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:47.851 14:32:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:47.851 14:32:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:47.851 14:32:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:47.851 14:32:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:47.851 14:32:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:47.851 14:32:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:48.119 14:32:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:48.119 14:32:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:48.119 14:32:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:48.119 14:32:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:48.119 14:32:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:48.119 14:32:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:48.119 14:32:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:48.119 14:32:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:48.119 14:32:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:48.119 [ 0]:0x2 00:13:48.119 14:32:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:48.119 14:32:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:48.119 14:32:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=673f339ac71548968958b25a8cda6d7c 00:13:48.119 14:32:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 673f339ac71548968958b25a8cda6d7c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 
]] 00:13:48.119 14:32:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:48.119 14:32:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:48.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.120 14:32:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:48.392 14:32:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:48.392 14:32:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2dbb5d2c-00e7-4266-8e9c-05179538acd1 -a 10.0.0.2 -s 4420 -i 4 00:13:48.392 14:32:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:48.392 14:32:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:48.392 14:32:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:48.392 14:32:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:48.392 14:32:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:48.392 14:32:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:50.923 14:32:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:50.923 14:32:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:50.923 14:32:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:50.923 14:32:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:13:50.923 14:32:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:50.923 14:32:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:50.923 14:32:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:50.923 14:32:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:50.923 14:32:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:50.923 14:32:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:50.923 14:32:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:50.923 14:32:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:50.923 14:32:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:50.923 [ 0]:0x1 00:13:50.923 14:32:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:50.924 14:32:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:50.924 14:32:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=78beaf91aaea43d2aab313d1e2f16dbf 00:13:50.924 14:32:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 78beaf91aaea43d2aab313d1e2f16dbf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:50.924 14:32:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:50.924 14:32:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme 
list-ns /dev/nvme0 00:13:50.924 14:32:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:50.924 [ 1]:0x2 00:13:50.924 14:32:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:50.924 14:32:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:50.924 14:32:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=673f339ac71548968958b25a8cda6d7c 00:13:50.924 14:32:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 673f339ac71548968958b25a8cda6d7c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:50.924 14:32:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:50.924 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:50.924 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:50.924 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:50.924 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:50.924 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:50.924 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:50.924 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:50.924 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:50.924 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:50.924 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:50.924 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:50.924 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:50.924 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:50.924 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:50.924 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:50.924 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:50.924 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:50.924 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:50.924 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:50.924 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:50.924 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:50.924 [ 0]:0x2 00:13:50.924 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:50.924 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:51.182 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=673f339ac71548968958b25a8cda6d7c 00:13:51.182 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 673f339ac71548968958b25a8cda6d7c != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:51.182 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:51.182 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:51.182 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:51.182 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:51.182 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:51.182 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:51.182 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:51.182 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:51.182 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:51.182 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:51.182 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:51.182 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:51.441 [2024-07-10 14:32:03.481230] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:51.441 2024/07/10 14:32:03 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:13:51.441 request: 00:13:51.441 { 00:13:51.441 "method": "nvmf_ns_remove_host", 00:13:51.441 "params": { 00:13:51.441 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:51.441 "nsid": 2, 00:13:51.441 "host": "nqn.2016-06.io.spdk:host1" 00:13:51.441 } 00:13:51.441 } 00:13:51.441 Got JSON-RPC error response 00:13:51.441 GoRPCClient: error on JSON-RPC call 00:13:51.441 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:51.441 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:51.441 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:51.441 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:51.441 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:51.441 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:51.441 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:51.441 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:51.441 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:51.441 14:32:03 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:51.441 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:51.441 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:51.441 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:51.441 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:51.441 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:51.441 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:51.441 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:51.441 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:51.441 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:51.441 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:51.441 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:51.441 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:51.442 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:51.442 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:51.442 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:51.442 [ 0]:0x2 00:13:51.442 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:51.442 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:51.442 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=673f339ac71548968958b25a8cda6d7c 00:13:51.442 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 673f339ac71548968958b25a8cda6d7c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:51.442 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:51.442 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:51.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.442 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=89127 00:13:51.442 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:51.442 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:51.442 14:32:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 89127 /var/tmp/host.sock 00:13:51.442 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 89127 ']' 00:13:51.442 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:13:51.442 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:51.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
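The trace above walks through SPDK's per-host namespace masking end to end: a namespace added with --no-auto-visible stays hidden from every host (nvme list-ns drops it and its NGUID reads back as all zeroes), nvmf_ns_add_host exposes it to one host NQN, and nvmf_ns_remove_host hides it again. The following is a minimal standalone sketch of that flow, using only rpc.py calls and nvme-cli probes that appear in the trace; the visible() helper is an illustrative stand-in for the test's ns_is_visible and assumes the controller shows up as /dev/nvme0, as it does above.

#!/usr/bin/env bash
# Sketch of the masking flow exercised above; paths and NQNs taken from the trace.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode1
host=nqn.2016-06.io.spdk:host1

# Add namespace 1 without auto-visibility: no host can see it yet.
"$rpc" nvmf_subsystem_add_ns "$subsys" Malloc1 -n 1 --no-auto-visible

# Illustrative probe: a masked namespace reports an all-zero NGUID, so any
# non-zero hex digit in the NGUID means the namespace is visible to this host.
visible() {
    nvme id-ns /dev/nvme0 -n "$1" -o json 2>/dev/null \
        | jq -r .nguid | grep -Eq '[1-9a-f]'
}

"$rpc" nvmf_ns_add_host "$subsys" 1 "$host"      # unmask for this host NQN
visible 1 && echo "nsid 1 visible to $host"
"$rpc" nvmf_ns_remove_host "$subsys" 1 "$host"   # mask it again
visible 1 || echo "nsid 1 hidden from $host"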
00:13:51.442 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:51.442 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:51.442 14:32:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:51.442 [2024-07-10 14:32:03.726627] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:13:51.442 [2024-07-10 14:32:03.726724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89127 ] 00:13:51.700 [2024-07-10 14:32:03.848535] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:51.700 [2024-07-10 14:32:03.867442] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.700 [2024-07-10 14:32:03.907899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.959 14:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:51.959 14:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:13:51.959 14:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.217 14:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:52.475 14:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 888c0bdd-cb1d-4c64-937e-472e7eb318dd 00:13:52.475 14:32:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:13:52.475 14:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 888C0BDDCB1D4C64937E472E7EB318DD -i 00:13:52.733 14:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 019729d1-8fdd-4f1e-aa5c-f30ba2a1379c 00:13:52.733 14:32:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:13:52.733 14:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 019729D18FDD4F1EAA5CF30BA2A1379C -i 00:13:52.991 14:32:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:53.249 14:32:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:53.508 14:32:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:53.508 14:32:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:53.767 nvme0n1 00:13:53.767 14:32:05 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:53.767 14:32:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:54.026 nvme1n2 00:13:54.026 14:32:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:54.026 14:32:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:54.026 14:32:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:54.026 14:32:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:54.026 14:32:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:54.284 14:32:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:54.543 14:32:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:54.543 14:32:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:54.543 14:32:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:54.801 14:32:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 888c0bdd-cb1d-4c64-937e-472e7eb318dd == \8\8\8\c\0\b\d\d\-\c\b\1\d\-\4\c\6\4\-\9\3\7\e\-\4\7\2\e\7\e\b\3\1\8\d\d ]] 00:13:54.801 14:32:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:54.801 14:32:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:54.801 14:32:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:55.060 14:32:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 019729d1-8fdd-4f1e-aa5c-f30ba2a1379c == \0\1\9\7\2\9\d\1\-\8\f\d\d\-\4\f\1\e\-\a\a\5\c\-\f\3\0\b\a\2\a\1\3\7\9\c ]] 00:13:55.060 14:32:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 89127 00:13:55.060 14:32:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 89127 ']' 00:13:55.060 14:32:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 89127 00:13:55.060 14:32:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:13:55.060 14:32:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:55.060 14:32:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89127 00:13:55.060 killing process with pid 89127 00:13:55.060 14:32:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:55.060 14:32:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:55.060 14:32:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89127' 00:13:55.060 14:32:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 89127 00:13:55.060 14:32:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 89127 00:13:55.318 14:32:07 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:55.577 14:32:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:13:55.577 14:32:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:13:55.577 14:32:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:55.577 14:32:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:13:55.577 14:32:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:55.577 14:32:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:13:55.577 14:32:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:55.577 14:32:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:55.577 rmmod nvme_tcp 00:13:55.577 rmmod nvme_fabrics 00:13:55.577 rmmod nvme_keyring 00:13:55.577 14:32:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:55.577 14:32:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:13:55.577 14:32:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:13:55.577 14:32:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 88762 ']' 00:13:55.577 14:32:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 88762 00:13:55.577 14:32:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 88762 ']' 00:13:55.577 14:32:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 88762 00:13:55.577 14:32:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:13:55.577 14:32:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:55.577 14:32:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88762 00:13:55.577 killing process with pid 88762 00:13:55.577 14:32:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:55.577 14:32:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:55.577 14:32:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88762' 00:13:55.577 14:32:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 88762 00:13:55.577 14:32:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 88762 00:13:55.837 14:32:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:55.837 14:32:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:55.837 14:32:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:55.837 14:32:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:55.837 14:32:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:55.837 14:32:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.837 14:32:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:55.837 14:32:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.837 14:32:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:55.837 00:13:55.837 real 0m16.843s 00:13:55.837 user 0m26.873s 00:13:55.837 sys 0m2.480s 00:13:55.837 14:32:08 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:55.837 14:32:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:55.837 ************************************ 00:13:55.837 END TEST nvmf_ns_masking 00:13:55.837 ************************************ 00:13:55.837 14:32:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:55.837 14:32:08 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:13:55.837 14:32:08 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:13:55.837 14:32:08 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:55.837 14:32:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:55.837 14:32:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:55.837 14:32:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:55.837 ************************************ 00:13:55.837 START TEST nvmf_host_management 00:13:55.837 ************************************ 00:13:55.837 14:32:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:56.096 * Looking for test storage... 00:13:56.096 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:56.096 14:32:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:56.096 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:56.096 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.096 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:56.097 14:32:08 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:56.097 Cannot find device "nvmf_tgt_br" 
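The "Cannot find device" messages above are the expected first half of nvmf_veth_init: the helper tears down any leftover topology, ignoring failures, and the trace below then rebuilds it from scratch with a network namespace for the target, veth pairs bridged back to the initiator side, 10.0.0.x addressing, an iptables accept rule for TCP port 4420, and ping checks. A condensed sketch of that topology follows, keeping the interface and namespace names from the trace; the explicit error suppression and the omission of the second target interface (nvmf_tgt_if2 / 10.0.0.3) are simplifications made here.

#!/usr/bin/env bash
# Condensed veth/netns topology as built by the trace below (single target IF).
set -e

# Clean up leftovers first; on a fresh host these fail, matching the
# "Cannot find device" messages above.
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
ip link delete nvmf_init_if 2>/dev/null || true
ip link delete nvmf_br 2>/dev/null || true

# The target side lives in its own namespace, reachable through veth pairs.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers together and let NVMe/TCP (port 4420) in.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator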
00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:56.097 Cannot find device "nvmf_tgt_br2" 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:56.097 Cannot find device "nvmf_tgt_br" 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:56.097 Cannot find device "nvmf_tgt_br2" 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:56.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:56.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:56.097 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:56.354 14:32:08 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:56.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:56.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:13:56.354 00:13:56.354 --- 10.0.0.2 ping statistics --- 00:13:56.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.354 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:56.354 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:56.354 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:13:56.354 00:13:56.354 --- 10.0.0.3 ping statistics --- 00:13:56.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.354 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:56.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:56.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:13:56.354 00:13:56.354 --- 10.0.0.1 ping statistics --- 00:13:56.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.354 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=89472 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 89472 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 89472 ']' 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:56.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:56.354 14:32:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:56.354 [2024-07-10 14:32:08.617645] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 
00:13:56.354 [2024-07-10 14:32:08.617736] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.611 [2024-07-10 14:32:08.740656] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:56.611 [2024-07-10 14:32:08.761001] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:56.611 [2024-07-10 14:32:08.802728] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.611 [2024-07-10 14:32:08.802779] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.611 [2024-07-10 14:32:08.802792] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.611 [2024-07-10 14:32:08.802803] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.611 [2024-07-10 14:32:08.802811] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:56.611 [2024-07-10 14:32:08.803869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:56.611 [2024-07-10 14:32:08.804178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:56.611 [2024-07-10 14:32:08.804336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:56.611 [2024-07-10 14:32:08.804343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.611 14:32:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:56.611 14:32:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:56.611 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:56.611 14:32:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:56.611 14:32:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:56.869 14:32:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.869 14:32:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:56.869 14:32:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.869 14:32:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:56.869 [2024-07-10 14:32:08.926187] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:56.869 14:32:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.869 14:32:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:56.869 14:32:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:56.869 14:32:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:56.869 14:32:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:56.869 14:32:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:56.869 14:32:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # 
rpc_cmd 00:13:56.869 14:32:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.869 14:32:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:56.869 Malloc0 00:13:56.869 [2024-07-10 14:32:08.997021] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:56.869 14:32:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.869 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:56.869 14:32:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:56.869 14:32:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:56.869 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=89532 00:13:56.869 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 89532 /var/tmp/bdevperf.sock 00:13:56.869 14:32:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 89532 ']' 00:13:56.869 14:32:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:56.869 14:32:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:56.869 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:56.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:56.869 14:32:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:56.869 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:56.869 14:32:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:56.869 14:32:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:56.869 14:32:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:56.869 14:32:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:56.869 14:32:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:56.869 14:32:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:56.869 { 00:13:56.869 "params": { 00:13:56.869 "name": "Nvme$subsystem", 00:13:56.869 "trtype": "$TEST_TRANSPORT", 00:13:56.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:56.869 "adrfam": "ipv4", 00:13:56.869 "trsvcid": "$NVMF_PORT", 00:13:56.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:56.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:56.869 "hdgst": ${hdgst:-false}, 00:13:56.869 "ddgst": ${ddgst:-false} 00:13:56.869 }, 00:13:56.869 "method": "bdev_nvme_attach_controller" 00:13:56.869 } 00:13:56.869 EOF 00:13:56.869 )") 00:13:56.869 14:32:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:56.869 14:32:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
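The heredoc above is how the test hands bdevperf its target without writing a config file: gen_nvmf_target_json expands one bdev_nvme_attach_controller entry per subsystem and the result reaches bdevperf through --json /dev/fd/63, i.e. a process substitution. Below is an illustrative generator in the same spirit. The per-controller params mirror the resolved values printed a few lines further down in the trace; the enclosing "subsystems"/"bdev" wrapper and the gen_target_json name are assumptions, since the full expansion is not shown in this excerpt.

#!/usr/bin/env bash
# Illustrative stand-in for gen_nvmf_target_json: the wrapper layout is assumed,
# only the attach-controller params are taken from the trace.
gen_target_json() {
cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
}

# Usage mirroring the trace: the generated JSON arrives on /dev/fd/63.
# /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
#     -r /var/tmp/bdevperf.sock --json <(gen_target_json) \
#     -q 64 -o 65536 -w verify -t 10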
00:13:56.869 14:32:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:56.869 14:32:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:56.869 "params": { 00:13:56.869 "name": "Nvme0", 00:13:56.869 "trtype": "tcp", 00:13:56.869 "traddr": "10.0.0.2", 00:13:56.869 "adrfam": "ipv4", 00:13:56.869 "trsvcid": "4420", 00:13:56.869 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:56.869 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:56.869 "hdgst": false, 00:13:56.869 "ddgst": false 00:13:56.869 }, 00:13:56.869 "method": "bdev_nvme_attach_controller" 00:13:56.869 }' 00:13:56.869 [2024-07-10 14:32:09.098230] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:13:56.869 [2024-07-10 14:32:09.098338] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89532 ] 00:13:57.127 [2024-07-10 14:32:09.219938] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:57.127 [2024-07-10 14:32:09.235595] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.127 [2024-07-10 14:32:09.271196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.127 Running I/O for 10 seconds... 00:13:57.384 14:32:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:57.384 14:32:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:57.384 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:57.384 14:32:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.384 14:32:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:57.384 14:32:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.384 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:57.384 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:57.384 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:57.384 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:57.384 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:57.384 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:57.384 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:57.384 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:57.384 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:57.384 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:57.384 14:32:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.384 14:32:09 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:13:57.384 14:32:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.384 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:13:57.384 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:13:57.384 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:13:57.643 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:13:57.643 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:57.643 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:57.643 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:57.643 14:32:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.643 14:32:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:57.643 14:32:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.643 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:13:57.643 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:13:57.643 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:13:57.643 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:57.643 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:57.643 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:57.643 14:32:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.643 14:32:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:57.643 [2024-07-10 14:32:09.843566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.643 [2024-07-10 14:32:09.843613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.643 [2024-07-10 14:32:09.843637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.643 [2024-07-10 14:32:09.843649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.643 [2024-07-10 14:32:09.843661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.643 [2024-07-10 14:32:09.843671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.643 [2024-07-10 14:32:09.843683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.643 [2024-07-10 14:32:09.843693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.643 [2024-07-10 
14:32:09.843705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.643 [2024-07-10 14:32:09.843715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.643 [2024-07-10 14:32:09.843727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.643 [2024-07-10 14:32:09.843737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.643 [2024-07-10 14:32:09.843748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.643 [2024-07-10 14:32:09.843758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.643 [2024-07-10 14:32:09.843770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.643 [2024-07-10 14:32:09.843780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.643 [2024-07-10 14:32:09.843792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.643 [2024-07-10 14:32:09.843801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.643 [2024-07-10 14:32:09.843813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.643 [2024-07-10 14:32:09.843822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.643 [2024-07-10 14:32:09.843834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.643 [2024-07-10 14:32:09.843843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.643 [2024-07-10 14:32:09.843855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.643 [2024-07-10 14:32:09.843865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.643 [2024-07-10 14:32:09.843877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.643 [2024-07-10 14:32:09.843886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.643 [2024-07-10 14:32:09.843902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.643 [2024-07-10 14:32:09.843912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.643 [2024-07-10 14:32:09.843924] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.643 [2024-07-10 14:32:09.843933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.643 [2024-07-10 14:32:09.843946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.643 [2024-07-10 14:32:09.843956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.643 [2024-07-10 14:32:09.843967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.643 [2024-07-10 14:32:09.843977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.643 [2024-07-10 14:32:09.843989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.643 [2024-07-10 14:32:09.843998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.643 [2024-07-10 14:32:09.844010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.643 [2024-07-10 14:32:09.844020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.643 [2024-07-10 14:32:09.844032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.643 [2024-07-10 14:32:09.844041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.643 [2024-07-10 14:32:09.844058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.643 [2024-07-10 14:32:09.844068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.643 [2024-07-10 14:32:09.844080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.643 [2024-07-10 14:32:09.844089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.643 [2024-07-10 14:32:09.844101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.643 [2024-07-10 14:32:09.844111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.643 [2024-07-10 14:32:09.844122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.643 [2024-07-10 14:32:09.844132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.643 [2024-07-10 14:32:09.844143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.643 [2024-07-10 14:32:09.844153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.643 [2024-07-10 14:32:09.844164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.643 [2024-07-10 14:32:09.844175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.643 [2024-07-10 14:32:09.844187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.643 [2024-07-10 14:32:09.844196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844370] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844598] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844815] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.844986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.844996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.845008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.644 [2024-07-10 14:32:09.845018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.644 [2024-07-10 14:32:09.845029] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141da80 is same with the state(5) to be set 00:13:57.644 [2024-07-10 14:32:09.845076] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x141da80 was disconnected and freed. reset controller. 00:13:57.644 [2024-07-10 14:32:09.846274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:57.644 14:32:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.644 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:57.644 14:32:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.644 14:32:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:57.644 task offset: 81792 on job bdev=Nvme0n1 fails 00:13:57.644 00:13:57.644 Latency(us) 00:13:57.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.644 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:57.644 Job: Nvme0n1 ended in about 0.44 seconds with error 00:13:57.644 Verification LBA range: start 0x0 length 0x400 00:13:57.644 Nvme0n1 : 0.44 1320.91 82.56 146.77 0.00 42139.37 5779.08 40274.85 00:13:57.644 =================================================================================================================== 00:13:57.645 Total : 1320.91 82.56 146.77 0.00 42139.37 5779.08 40274.85 00:13:57.645 [2024-07-10 14:32:09.848371] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:57.645 [2024-07-10 14:32:09.848405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9c850 (9): Bad file descriptor 00:13:57.645 14:32:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.645 14:32:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:13:57.645 [2024-07-10 14:32:09.859730] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
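The burst of ABORTED - SQ DELETION completions, the freed qpair, and the controller reset above are the intended effect of the host-management step: the host NQN is revoked while bdevperf still has I/O in flight, then re-added so the initiator's automatic reset can reconnect. Expressed as standalone RPCs (a sketch using this run's NQNs; the script issues them through its rpc_cmd wrapper):

  # revoke the host's access: its qpairs are torn down and in-flight I/O is aborted (SQ DELETION)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # re-authorize the host so the initiator-side controller reset can reconnect and resume I/O
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0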
00:13:58.576 14:32:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 89532 00:13:58.576 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (89532) - No such process 00:13:58.576 14:32:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:13:58.576 14:32:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:58.576 14:32:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:58.576 14:32:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:58.576 14:32:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:58.576 14:32:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:58.576 14:32:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:58.576 14:32:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:58.576 { 00:13:58.576 "params": { 00:13:58.576 "name": "Nvme$subsystem", 00:13:58.576 "trtype": "$TEST_TRANSPORT", 00:13:58.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:58.576 "adrfam": "ipv4", 00:13:58.576 "trsvcid": "$NVMF_PORT", 00:13:58.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:58.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:58.576 "hdgst": ${hdgst:-false}, 00:13:58.576 "ddgst": ${ddgst:-false} 00:13:58.576 }, 00:13:58.576 "method": "bdev_nvme_attach_controller" 00:13:58.576 } 00:13:58.576 EOF 00:13:58.576 )") 00:13:58.576 14:32:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:58.835 14:32:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:58.835 14:32:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:58.835 14:32:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:58.835 "params": { 00:13:58.835 "name": "Nvme0", 00:13:58.835 "trtype": "tcp", 00:13:58.835 "traddr": "10.0.0.2", 00:13:58.835 "adrfam": "ipv4", 00:13:58.835 "trsvcid": "4420", 00:13:58.835 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:58.835 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:58.835 "hdgst": false, 00:13:58.835 "ddgst": false 00:13:58.835 }, 00:13:58.835 "method": "bdev_nvme_attach_controller" 00:13:58.835 }' 00:13:58.835 [2024-07-10 14:32:10.911131] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:13:58.835 [2024-07-10 14:32:10.911265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89578 ] 00:13:58.835 [2024-07-10 14:32:11.038501] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:58.835 [2024-07-10 14:32:11.053166] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.835 [2024-07-10 14:32:11.089047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.093 Running I/O for 1 seconds... 
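The second bdevperf pass above is driven purely by the generated JSON config (no -r RPC socket this time) and runs the verify workload for one second to confirm the restored host can still do I/O. A standalone equivalent, assuming the config is written to a temporary file instead of being passed through process substitution (the file name is hypothetical):

  # emit the same attach-controller config used above (helper from test/nvmf/common.sh)
  gen_nvmf_target_json 0 > /tmp/nvme0_bdevperf.json
  # replay the workload: queue depth 64, 64 KiB I/O size, verify, 1 second
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /tmp/nvme0_bdevperf.json -q 64 -o 65536 -w verify -t 1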
00:14:00.025 00:14:00.025 Latency(us) 00:14:00.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.025 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:00.025 Verification LBA range: start 0x0 length 0x400 00:14:00.025 Nvme0n1 : 1.04 1541.55 96.35 0.00 0.00 40697.62 5093.93 36700.16 00:14:00.025 =================================================================================================================== 00:14:00.025 Total : 1541.55 96.35 0.00 0.00 40697.62 5093.93 36700.16 00:14:00.283 14:32:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:14:00.283 14:32:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:00.283 14:32:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:14:00.283 14:32:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:00.283 14:32:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:14:00.283 14:32:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:00.283 14:32:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:14:00.283 14:32:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:00.283 14:32:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:14:00.283 14:32:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:00.283 14:32:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:00.283 rmmod nvme_tcp 00:14:00.283 rmmod nvme_fabrics 00:14:00.283 rmmod nvme_keyring 00:14:00.283 14:32:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:00.283 14:32:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:14:00.283 14:32:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:14:00.283 14:32:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 89472 ']' 00:14:00.283 14:32:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 89472 00:14:00.283 14:32:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 89472 ']' 00:14:00.283 14:32:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 89472 00:14:00.283 14:32:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:14:00.283 14:32:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:00.283 14:32:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89472 00:14:00.283 14:32:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:00.283 14:32:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:00.283 killing process with pid 89472 00:14:00.283 14:32:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89472' 00:14:00.283 14:32:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 89472 00:14:00.283 14:32:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 89472 00:14:00.541 [2024-07-10 14:32:12.646439] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd 
for core 1, errno: 2 00:14:00.541 14:32:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:00.541 14:32:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:00.541 14:32:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:00.541 14:32:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:00.541 14:32:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:00.541 14:32:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.541 14:32:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.541 14:32:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.541 14:32:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:00.541 14:32:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:00.541 00:14:00.541 real 0m4.621s 00:14:00.541 user 0m17.557s 00:14:00.541 sys 0m1.131s 00:14:00.541 14:32:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:00.541 ************************************ 00:14:00.541 END TEST nvmf_host_management 00:14:00.541 14:32:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:00.541 ************************************ 00:14:00.541 14:32:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:00.541 14:32:12 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:00.541 14:32:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:00.541 14:32:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:00.541 14:32:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:00.541 ************************************ 00:14:00.541 START TEST nvmf_lvol 00:14:00.541 ************************************ 00:14:00.541 14:32:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:00.799 * Looking for test storage... 
00:14:00.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:00.799 14:32:12 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:00.799 Cannot find device "nvmf_tgt_br" 00:14:00.799 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:14:00.800 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:00.800 Cannot find device "nvmf_tgt_br2" 00:14:00.800 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:14:00.800 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:00.800 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:00.800 Cannot find device "nvmf_tgt_br" 00:14:00.800 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:14:00.800 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:00.800 Cannot find device "nvmf_tgt_br2" 00:14:00.800 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:14:00.800 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:00.800 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:00.800 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:00.800 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.800 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:14:00.800 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:00.800 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.800 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:14:00.800 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:00.800 14:32:12 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:00.800 14:32:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:00.800 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:00.800 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:00.800 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:00.800 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:00.800 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:00.800 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:00.800 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:00.800 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:00.800 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:00.800 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:00.800 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:00.800 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:00.800 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:00.800 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:01.058 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:01.058 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:01.058 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:01.058 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:01.058 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:01.058 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:01.058 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:01.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:01.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:14:01.058 00:14:01.058 --- 10.0.0.2 ping statistics --- 00:14:01.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.058 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:14:01.058 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:01.058 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:01.058 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.137 ms 00:14:01.058 00:14:01.058 --- 10.0.0.3 ping statistics --- 00:14:01.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.058 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:14:01.058 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:01.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:01.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:14:01.058 00:14:01.058 --- 10.0.0.1 ping statistics --- 00:14:01.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.058 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:14:01.058 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:01.058 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:14:01.058 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:01.058 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:01.058 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:01.058 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:01.058 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:01.058 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:01.058 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:01.058 14:32:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:01.058 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:01.058 14:32:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:01.059 14:32:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:01.059 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=89784 00:14:01.059 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 89784 00:14:01.059 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:01.059 14:32:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 89784 ']' 00:14:01.059 14:32:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.059 14:32:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:01.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.059 14:32:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.059 14:32:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:01.059 14:32:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:01.059 [2024-07-10 14:32:13.263659] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:14:01.059 [2024-07-10 14:32:13.263786] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.317 [2024-07-10 14:32:13.389331] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:01.317 [2024-07-10 14:32:13.409543] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:01.317 [2024-07-10 14:32:13.450442] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.317 [2024-07-10 14:32:13.450535] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:01.317 [2024-07-10 14:32:13.450561] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.317 [2024-07-10 14:32:13.450578] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.317 [2024-07-10 14:32:13.450591] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:01.317 [2024-07-10 14:32:13.450682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.317 [2024-07-10 14:32:13.450789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.317 [2024-07-10 14:32:13.450809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.317 14:32:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:01.317 14:32:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:14:01.317 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:01.317 14:32:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:01.317 14:32:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:01.317 14:32:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.317 14:32:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:01.575 [2024-07-10 14:32:13.826612] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.575 14:32:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:01.833 14:32:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:02.091 14:32:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:02.350 14:32:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:02.350 14:32:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:02.608 14:32:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:02.865 14:32:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c4c90c8b-b74b-439f-89bd-5473637d8e00 00:14:02.865 14:32:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c4c90c8b-b74b-439f-89bd-5473637d8e00 lvol 20 00:14:03.123 14:32:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=540a0e79-442a-4840-8de5-7f6255569b16 00:14:03.123 14:32:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:03.381 14:32:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 540a0e79-442a-4840-8de5-7f6255569b16 00:14:03.639 14:32:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:03.896 [2024-07-10 14:32:16.051663] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.896 14:32:16 nvmf_tcp.nvmf_lvol -- 
target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:04.154 14:32:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=89913 00:14:04.154 14:32:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:04.154 14:32:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:05.087 14:32:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 540a0e79-442a-4840-8de5-7f6255569b16 MY_SNAPSHOT 00:14:05.653 14:32:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ab8a0bbd-e659-4921-a9cc-da7cf15e4ff2 00:14:05.653 14:32:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 540a0e79-442a-4840-8de5-7f6255569b16 30 00:14:05.910 14:32:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone ab8a0bbd-e659-4921-a9cc-da7cf15e4ff2 MY_CLONE 00:14:06.168 14:32:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=6672bb82-30fe-45b7-843e-de0539c1c175 00:14:06.168 14:32:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 6672bb82-30fe-45b7-843e-de0539c1c175 00:14:06.733 14:32:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 89913 00:14:14.844 Initializing NVMe Controllers 00:14:14.844 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:14.844 Controller IO queue size 128, less than required. 00:14:14.844 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:14.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:14.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:14.844 Initialization complete. Launching workers. 
00:14:14.844 ======================================================== 00:14:14.844 Latency(us) 00:14:14.844 Device Information : IOPS MiB/s Average min max 00:14:14.844 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10333.76 40.37 12393.22 267.96 103542.42 00:14:14.844 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10331.16 40.36 12392.52 2548.82 64235.18 00:14:14.844 ======================================================== 00:14:14.844 Total : 20664.92 80.72 12392.87 267.96 103542.42 00:14:14.844 00:14:14.844 14:32:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:14.844 14:32:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 540a0e79-442a-4840-8de5-7f6255569b16 00:14:15.102 14:32:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c4c90c8b-b74b-439f-89bd-5473637d8e00 00:14:15.360 14:32:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:15.360 14:32:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:15.360 14:32:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:15.360 14:32:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:15.360 14:32:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:15.360 14:32:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:15.361 14:32:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:15.361 14:32:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:15.361 14:32:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:15.361 rmmod nvme_tcp 00:14:15.361 rmmod nvme_fabrics 00:14:15.361 rmmod nvme_keyring 00:14:15.361 14:32:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:15.361 14:32:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:15.361 14:32:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:15.361 14:32:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 89784 ']' 00:14:15.361 14:32:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 89784 00:14:15.361 14:32:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 89784 ']' 00:14:15.361 14:32:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 89784 00:14:15.361 14:32:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:14:15.361 14:32:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:15.361 14:32:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89784 00:14:15.361 killing process with pid 89784 00:14:15.361 14:32:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:15.361 14:32:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:15.361 14:32:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89784' 00:14:15.361 14:32:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 89784 00:14:15.361 14:32:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 89784 00:14:15.620 14:32:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:15.620 14:32:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
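For reference, the nvmf_lvol body that just completed boils down to the rpc.py sequence below, listed in the order the commands appear in the log. "rpc.py" abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and the <...> placeholders stand for the UUIDs printed above (the c4c90c8b-... lvstore, the 540a0e79-... lvol and its snapshot/clone). This is a condensed recap of recorded commands, not additional test output.

rpc.py bdev_malloc_create 64 512                                   # Malloc0
rpc.py bdev_malloc_create 64 512                                   # Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # stripe the two malloc bdevs
rpc.py bdev_lvol_create_lvstore raid0 lvs                          # lvstore on top of the raid
rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20                      # 20 MiB logical volume
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &             # background I/O during the lvol ops
rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT
rpc.py bdev_lvol_resize <lvol-uuid> 30
rpc.py bdev_lvol_clone <snapshot-uuid> MY_CLONE
rpc.py bdev_lvol_inflate <clone-uuid>
wait                                                               # let the 10 s perf run finish
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
rpc.py bdev_lvol_delete <lvol-uuid>
rpc.py bdev_lvol_delete_lvstore -u <lvs-uuid>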
00:14:15.620 14:32:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:15.620 14:32:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:15.620 14:32:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:15.620 14:32:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.620 14:32:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.620 14:32:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.620 14:32:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:15.620 00:14:15.620 real 0m15.070s 00:14:15.620 user 1m4.240s 00:14:15.620 sys 0m3.664s 00:14:15.620 14:32:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:15.620 14:32:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:15.620 ************************************ 00:14:15.620 END TEST nvmf_lvol 00:14:15.620 ************************************ 00:14:15.620 14:32:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:15.620 14:32:27 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:15.620 14:32:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:15.620 14:32:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:15.620 14:32:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:15.620 ************************************ 00:14:15.620 START TEST nvmf_lvs_grow 00:14:15.620 ************************************ 00:14:15.620 14:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:15.879 * Looking for test storage... 
00:14:15.879 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:15.879 14:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:15.879 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:15.879 Cannot find device "nvmf_tgt_br" 00:14:15.879 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:14:15.879 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:15.879 Cannot find device "nvmf_tgt_br2" 00:14:15.879 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:14:15.879 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:15.879 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:15.879 Cannot find device "nvmf_tgt_br" 00:14:15.879 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:14:15.879 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:15.879 Cannot find device "nvmf_tgt_br2" 00:14:15.879 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:14:15.879 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:15.879 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:15.879 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:15.879 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:14:15.879 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:14:15.879 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:15.879 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:15.879 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:14:15.879 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:15.879 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:15.879 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:15.879 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:15.879 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:16.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:16.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:14:16.138 00:14:16.138 --- 10.0.0.2 ping statistics --- 00:14:16.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.138 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:16.138 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:16.138 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:14:16.138 00:14:16.138 --- 10.0.0.3 ping statistics --- 00:14:16.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.138 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:16.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:16.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:14:16.138 00:14:16.138 --- 10.0.0.1 ping statistics --- 00:14:16.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.138 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=90278 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 90278 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 90278 ']' 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:16.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
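For reference, the nvmf_veth_init sequence that produced the entries above (and the identical one in the nvmf_lvol prologue earlier) builds the following topology. Interface names, addresses and rules are copied from the log; the per-link "ip link set ... up" steps are elided for brevity:

ip netns add nvmf_tgt_ns_spdk                                   # target runs in its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # first target interface
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # second target interface
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge                                 # host-side peers hang off one bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                              # sanity checks, as logged above
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1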
00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:16.138 14:32:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:16.396 [2024-07-10 14:32:28.434526] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:14:16.396 [2024-07-10 14:32:28.435091] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.396 [2024-07-10 14:32:28.555018] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:16.396 [2024-07-10 14:32:28.567344] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.396 [2024-07-10 14:32:28.603597] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.396 [2024-07-10 14:32:28.603647] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:16.396 [2024-07-10 14:32:28.603658] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.396 [2024-07-10 14:32:28.603666] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.396 [2024-07-10 14:32:28.603674] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:16.396 [2024-07-10 14:32:28.603706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.396 14:32:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:16.396 14:32:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:14:16.396 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:16.396 14:32:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:16.396 14:32:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:16.654 14:32:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:16.654 14:32:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:16.913 [2024-07-10 14:32:29.012879] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:16.913 14:32:29 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:16.913 14:32:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:16.913 14:32:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:16.913 14:32:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:16.913 ************************************ 00:14:16.913 START TEST lvs_grow_clean 00:14:16.913 ************************************ 00:14:16.913 14:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:14:16.913 14:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:16.913 14:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:16.913 14:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:16.913 14:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:16.913 14:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:16.913 14:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:16.913 14:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:16.913 14:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:16.913 14:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:17.171 14:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:17.171 14:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:17.429 14:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=d9b3c025-01d0-429b-9e84-3ac8fa3e6f33 00:14:17.429 14:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d9b3c025-01d0-429b-9e84-3ac8fa3e6f33 00:14:17.429 14:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:17.687 14:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:17.687 14:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:17.687 14:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d9b3c025-01d0-429b-9e84-3ac8fa3e6f33 lvol 150 00:14:17.945 14:32:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=18701281-74fe-46f8-af01-526d5251e011 00:14:17.945 14:32:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:17.945 14:32:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:18.257 [2024-07-10 14:32:30.480264] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:18.257 [2024-07-10 14:32:30.480360] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:18.257 true 00:14:18.257 14:32:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:18.257 14:32:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d9b3c025-01d0-429b-9e84-3ac8fa3e6f33 00:14:18.515 14:32:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:18.515 14:32:30 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:18.773 14:32:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 18701281-74fe-46f8-af01-526d5251e011 00:14:19.339 14:32:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:19.339 [2024-07-10 14:32:31.560835] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:19.339 14:32:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:19.597 14:32:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=90426 00:14:19.597 14:32:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:19.597 14:32:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:19.597 14:32:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 90426 /var/tmp/bdevperf.sock 00:14:19.597 14:32:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 90426 ']' 00:14:19.598 14:32:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:19.598 14:32:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:19.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:19.598 14:32:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:19.598 14:32:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:19.598 14:32:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:19.598 [2024-07-10 14:32:31.871170] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:14:19.598 [2024-07-10 14:32:31.871256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90426 ] 00:14:19.855 [2024-07-10 14:32:31.989428] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:19.855 [2024-07-10 14:32:32.007830] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.855 [2024-07-10 14:32:32.047823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.855 14:32:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:19.855 14:32:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:14:19.855 14:32:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:20.452 Nvme0n1 00:14:20.452 14:32:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:20.710 [ 00:14:20.710 { 00:14:20.710 "aliases": [ 00:14:20.710 "18701281-74fe-46f8-af01-526d5251e011" 00:14:20.710 ], 00:14:20.710 "assigned_rate_limits": { 00:14:20.710 "r_mbytes_per_sec": 0, 00:14:20.710 "rw_ios_per_sec": 0, 00:14:20.710 "rw_mbytes_per_sec": 0, 00:14:20.710 "w_mbytes_per_sec": 0 00:14:20.710 }, 00:14:20.710 "block_size": 4096, 00:14:20.710 "claimed": false, 00:14:20.710 "driver_specific": { 00:14:20.710 "mp_policy": "active_passive", 00:14:20.710 "nvme": [ 00:14:20.710 { 00:14:20.710 "ctrlr_data": { 00:14:20.710 "ana_reporting": false, 00:14:20.710 "cntlid": 1, 00:14:20.710 "firmware_revision": "24.09", 00:14:20.710 "model_number": "SPDK bdev Controller", 00:14:20.710 "multi_ctrlr": true, 00:14:20.710 "oacs": { 00:14:20.710 "firmware": 0, 00:14:20.710 "format": 0, 00:14:20.710 "ns_manage": 0, 00:14:20.710 "security": 0 00:14:20.710 }, 00:14:20.710 "serial_number": "SPDK0", 00:14:20.710 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:20.710 "vendor_id": "0x8086" 00:14:20.710 }, 00:14:20.710 "ns_data": { 00:14:20.710 "can_share": true, 00:14:20.710 "id": 1 00:14:20.710 }, 00:14:20.710 "trid": { 00:14:20.710 "adrfam": "IPv4", 00:14:20.710 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:20.710 "traddr": "10.0.0.2", 00:14:20.710 "trsvcid": "4420", 00:14:20.710 "trtype": "TCP" 00:14:20.710 }, 00:14:20.710 "vs": { 00:14:20.710 "nvme_version": "1.3" 00:14:20.710 } 00:14:20.710 } 00:14:20.710 ] 00:14:20.710 }, 00:14:20.710 "memory_domains": [ 00:14:20.710 { 00:14:20.710 "dma_device_id": "system", 00:14:20.710 "dma_device_type": 1 00:14:20.710 } 00:14:20.710 ], 00:14:20.710 "name": "Nvme0n1", 00:14:20.710 "num_blocks": 38912, 00:14:20.710 "product_name": "NVMe disk", 00:14:20.710 "supported_io_types": { 00:14:20.710 "abort": true, 00:14:20.710 "compare": true, 00:14:20.710 "compare_and_write": true, 00:14:20.710 "copy": true, 00:14:20.710 "flush": true, 00:14:20.710 "get_zone_info": false, 00:14:20.710 "nvme_admin": true, 00:14:20.710 "nvme_io": true, 00:14:20.710 "nvme_io_md": false, 00:14:20.710 "nvme_iov_md": false, 00:14:20.710 "read": true, 00:14:20.710 "reset": true, 00:14:20.710 "seek_data": false, 00:14:20.710 "seek_hole": false, 00:14:20.710 "unmap": true, 00:14:20.710 "write": true, 00:14:20.710 "write_zeroes": true, 00:14:20.710 "zcopy": false, 00:14:20.710 "zone_append": false, 00:14:20.710 "zone_management": false 00:14:20.710 }, 00:14:20.710 "uuid": "18701281-74fe-46f8-af01-526d5251e011", 00:14:20.710 "zoned": false 00:14:20.710 } 00:14:20.710 ] 00:14:20.710 14:32:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=90460 
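The bdevperf instance that lvs_grow_clean just brought up was started with -z, so it sits idle until driven over its RPC socket; the control flow around the 10-second run that follows is, in condensed form (socket path, bdev name and flags are the ones recorded in this log; "bdevperf" and "bdevperf.py" abbreviate build/examples/bdevperf and examples/bdev/bdevperf/bdevperf.py in the spdk repo):

bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &   # start idle, wait for RPCs
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0                            # exposes the lvol namespace as Nvme0n1
rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000   # wait until the bdev is visible (JSON above)
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests                  # kick off the randwrite job; the lvstore grow is issued mid-run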
00:14:20.710 14:32:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:20.710 14:32:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:20.710 Running I/O for 10 seconds... 00:14:21.643 Latency(us) 00:14:21.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.643 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:21.643 Nvme0n1 : 1.00 8095.00 31.62 0.00 0.00 0.00 0.00 0.00 00:14:21.643 =================================================================================================================== 00:14:21.643 Total : 8095.00 31.62 0.00 0.00 0.00 0.00 0.00 00:14:21.643 00:14:22.576 14:32:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d9b3c025-01d0-429b-9e84-3ac8fa3e6f33 00:14:22.834 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:22.834 Nvme0n1 : 2.00 8202.50 32.04 0.00 0.00 0.00 0.00 0.00 00:14:22.834 =================================================================================================================== 00:14:22.834 Total : 8202.50 32.04 0.00 0.00 0.00 0.00 0.00 00:14:22.834 00:14:22.834 true 00:14:22.834 14:32:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:22.834 14:32:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d9b3c025-01d0-429b-9e84-3ac8fa3e6f33 00:14:23.092 14:32:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:23.092 14:32:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:23.092 14:32:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 90460 00:14:23.749 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:23.749 Nvme0n1 : 3.00 8183.00 31.96 0.00 0.00 0.00 0.00 0.00 00:14:23.749 =================================================================================================================== 00:14:23.749 Total : 8183.00 31.96 0.00 0.00 0.00 0.00 0.00 00:14:23.749 00:14:24.684 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:24.684 Nvme0n1 : 4.00 8157.25 31.86 0.00 0.00 0.00 0.00 0.00 00:14:24.684 =================================================================================================================== 00:14:24.684 Total : 8157.25 31.86 0.00 0.00 0.00 0.00 0.00 00:14:24.684 00:14:25.618 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:25.618 Nvme0n1 : 5.00 8117.80 31.71 0.00 0.00 0.00 0.00 0.00 00:14:25.618 =================================================================================================================== 00:14:25.618 Total : 8117.80 31.71 0.00 0.00 0.00 0.00 0.00 00:14:25.618 00:14:26.992 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:26.992 Nvme0n1 : 6.00 8111.50 31.69 0.00 0.00 0.00 0.00 0.00 00:14:26.992 =================================================================================================================== 00:14:26.992 Total : 8111.50 31.69 0.00 0.00 0.00 0.00 0.00 00:14:26.992 00:14:27.925 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:14:27.925 Nvme0n1 : 7.00 8105.43 31.66 0.00 0.00 0.00 0.00 0.00 00:14:27.925 =================================================================================================================== 00:14:27.925 Total : 8105.43 31.66 0.00 0.00 0.00 0.00 0.00 00:14:27.925 00:14:28.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:28.858 Nvme0n1 : 8.00 8083.38 31.58 0.00 0.00 0.00 0.00 0.00 00:14:28.858 =================================================================================================================== 00:14:28.858 Total : 8083.38 31.58 0.00 0.00 0.00 0.00 0.00 00:14:28.858 00:14:29.843 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:29.843 Nvme0n1 : 9.00 8078.78 31.56 0.00 0.00 0.00 0.00 0.00 00:14:29.843 =================================================================================================================== 00:14:29.843 Total : 8078.78 31.56 0.00 0.00 0.00 0.00 0.00 00:14:29.843 00:14:30.775 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:30.775 Nvme0n1 : 10.00 8064.30 31.50 0.00 0.00 0.00 0.00 0.00 00:14:30.775 =================================================================================================================== 00:14:30.775 Total : 8064.30 31.50 0.00 0.00 0.00 0.00 0.00 00:14:30.775 00:14:30.775 00:14:30.775 Latency(us) 00:14:30.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.775 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:30.775 Nvme0n1 : 10.01 8067.24 31.51 0.00 0.00 15861.07 7745.16 39559.91 00:14:30.775 =================================================================================================================== 00:14:30.775 Total : 8067.24 31.51 0.00 0.00 15861.07 7745.16 39559.91 00:14:30.775 0 00:14:30.775 14:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 90426 00:14:30.775 14:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 90426 ']' 00:14:30.775 14:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 90426 00:14:30.775 14:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:14:30.775 14:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:30.775 14:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90426 00:14:30.775 14:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:30.775 14:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:30.775 killing process with pid 90426 00:14:30.775 14:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90426' 00:14:30.775 14:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 90426 00:14:30.775 Received shutdown signal, test time was about 10.000000 seconds 00:14:30.775 00:14:30.775 Latency(us) 00:14:30.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.775 =================================================================================================================== 00:14:30.775 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:30.775 14:32:42 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 90426 00:14:31.032 14:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:31.032 14:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:31.289 14:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d9b3c025-01d0-429b-9e84-3ac8fa3e6f33 00:14:31.289 14:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:31.854 14:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:31.854 14:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:31.854 14:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:31.854 [2024-07-10 14:32:44.110943] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:32.112 14:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d9b3c025-01d0-429b-9e84-3ac8fa3e6f33 00:14:32.112 14:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:14:32.112 14:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d9b3c025-01d0-429b-9e84-3ac8fa3e6f33 00:14:32.112 14:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:32.112 14:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:32.112 14:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:32.112 14:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:32.112 14:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:32.112 14:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:32.112 14:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:32.112 14:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:32.112 14:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d9b3c025-01d0-429b-9e84-3ac8fa3e6f33 00:14:32.370 2024/07/10 14:32:44 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:d9b3c025-01d0-429b-9e84-3ac8fa3e6f33], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:32.370 request: 00:14:32.370 { 00:14:32.370 "method": "bdev_lvol_get_lvstores", 00:14:32.370 "params": { 
00:14:32.370 "uuid": "d9b3c025-01d0-429b-9e84-3ac8fa3e6f33" 00:14:32.370 } 00:14:32.370 } 00:14:32.370 Got JSON-RPC error response 00:14:32.370 GoRPCClient: error on JSON-RPC call 00:14:32.370 14:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:14:32.370 14:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:32.370 14:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:32.371 14:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:32.371 14:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:32.628 aio_bdev 00:14:32.628 14:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 18701281-74fe-46f8-af01-526d5251e011 00:14:32.628 14:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=18701281-74fe-46f8-af01-526d5251e011 00:14:32.628 14:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:32.628 14:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:14:32.628 14:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:32.628 14:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:32.629 14:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:32.886 14:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 18701281-74fe-46f8-af01-526d5251e011 -t 2000 00:14:33.144 [ 00:14:33.144 { 00:14:33.144 "aliases": [ 00:14:33.144 "lvs/lvol" 00:14:33.144 ], 00:14:33.144 "assigned_rate_limits": { 00:14:33.144 "r_mbytes_per_sec": 0, 00:14:33.144 "rw_ios_per_sec": 0, 00:14:33.144 "rw_mbytes_per_sec": 0, 00:14:33.144 "w_mbytes_per_sec": 0 00:14:33.144 }, 00:14:33.145 "block_size": 4096, 00:14:33.145 "claimed": false, 00:14:33.145 "driver_specific": { 00:14:33.145 "lvol": { 00:14:33.145 "base_bdev": "aio_bdev", 00:14:33.145 "clone": false, 00:14:33.145 "esnap_clone": false, 00:14:33.145 "lvol_store_uuid": "d9b3c025-01d0-429b-9e84-3ac8fa3e6f33", 00:14:33.145 "num_allocated_clusters": 38, 00:14:33.145 "snapshot": false, 00:14:33.145 "thin_provision": false 00:14:33.145 } 00:14:33.145 }, 00:14:33.145 "name": "18701281-74fe-46f8-af01-526d5251e011", 00:14:33.145 "num_blocks": 38912, 00:14:33.145 "product_name": "Logical Volume", 00:14:33.145 "supported_io_types": { 00:14:33.145 "abort": false, 00:14:33.145 "compare": false, 00:14:33.145 "compare_and_write": false, 00:14:33.145 "copy": false, 00:14:33.145 "flush": false, 00:14:33.145 "get_zone_info": false, 00:14:33.145 "nvme_admin": false, 00:14:33.145 "nvme_io": false, 00:14:33.145 "nvme_io_md": false, 00:14:33.145 "nvme_iov_md": false, 00:14:33.145 "read": true, 00:14:33.145 "reset": true, 00:14:33.145 "seek_data": true, 00:14:33.145 "seek_hole": true, 00:14:33.145 "unmap": true, 00:14:33.145 "write": true, 00:14:33.145 "write_zeroes": true, 00:14:33.145 "zcopy": false, 00:14:33.145 "zone_append": false, 00:14:33.145 "zone_management": false 00:14:33.145 }, 
00:14:33.145 "uuid": "18701281-74fe-46f8-af01-526d5251e011", 00:14:33.145 "zoned": false 00:14:33.145 } 00:14:33.145 ] 00:14:33.145 14:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:14:33.145 14:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d9b3c025-01d0-429b-9e84-3ac8fa3e6f33 00:14:33.145 14:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:33.403 14:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:33.403 14:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d9b3c025-01d0-429b-9e84-3ac8fa3e6f33 00:14:33.403 14:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:33.660 14:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:33.660 14:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 18701281-74fe-46f8-af01-526d5251e011 00:14:33.918 14:32:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d9b3c025-01d0-429b-9e84-3ac8fa3e6f33 00:14:34.176 14:32:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:34.434 14:32:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:35.019 00:14:35.019 real 0m18.030s 00:14:35.019 user 0m17.388s 00:14:35.019 sys 0m1.999s 00:14:35.019 14:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:35.019 14:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:35.019 ************************************ 00:14:35.019 END TEST lvs_grow_clean 00:14:35.019 ************************************ 00:14:35.019 14:32:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:35.019 14:32:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:35.019 14:32:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:35.019 14:32:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:35.019 14:32:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:35.019 ************************************ 00:14:35.019 START TEST lvs_grow_dirty 00:14:35.019 ************************************ 00:14:35.019 14:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:14:35.019 14:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:35.019 14:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:35.019 14:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:35.019 14:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:35.019 
14:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:35.019 14:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:35.019 14:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:35.019 14:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:35.019 14:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:35.277 14:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:35.277 14:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:35.535 14:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=fe43ba93-922f-48e2-8a4c-407395fa1604 00:14:35.535 14:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe43ba93-922f-48e2-8a4c-407395fa1604 00:14:35.535 14:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:35.793 14:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:35.793 14:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:35.793 14:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fe43ba93-922f-48e2-8a4c-407395fa1604 lvol 150 00:14:36.051 14:32:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=397fb55a-c79e-49a5-b401-d984a3b58f0a 00:14:36.051 14:32:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:36.051 14:32:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:36.309 [2024-07-10 14:32:48.552162] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:36.309 [2024-07-10 14:32:48.552246] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:36.309 true 00:14:36.309 14:32:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:36.309 14:32:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe43ba93-922f-48e2-8a4c-407395fa1604 00:14:36.885 14:32:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:36.885 14:32:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:37.144 14:32:49 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 397fb55a-c79e-49a5-b401-d984a3b58f0a 00:14:37.403 14:32:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:37.660 [2024-07-10 14:32:49.724746] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.660 14:32:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:37.918 14:32:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=90861 00:14:37.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:37.918 14:32:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:37.918 14:32:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:37.918 14:32:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 90861 /var/tmp/bdevperf.sock 00:14:37.918 14:32:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 90861 ']' 00:14:37.918 14:32:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:37.918 14:32:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:37.918 14:32:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:37.918 14:32:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:37.918 14:32:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:37.918 [2024-07-10 14:32:50.042080] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:14:37.918 [2024-07-10 14:32:50.042759] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90861 ] 00:14:37.918 [2024-07-10 14:32:50.164943] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:37.918 [2024-07-10 14:32:50.181451] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.177 [2024-07-10 14:32:50.218885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.742 14:32:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:38.742 14:32:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:38.742 14:32:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:39.308 Nvme0n1 00:14:39.308 14:32:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:39.566 [ 00:14:39.566 { 00:14:39.566 "aliases": [ 00:14:39.566 "397fb55a-c79e-49a5-b401-d984a3b58f0a" 00:14:39.566 ], 00:14:39.566 "assigned_rate_limits": { 00:14:39.566 "r_mbytes_per_sec": 0, 00:14:39.566 "rw_ios_per_sec": 0, 00:14:39.566 "rw_mbytes_per_sec": 0, 00:14:39.566 "w_mbytes_per_sec": 0 00:14:39.566 }, 00:14:39.566 "block_size": 4096, 00:14:39.566 "claimed": false, 00:14:39.566 "driver_specific": { 00:14:39.566 "mp_policy": "active_passive", 00:14:39.566 "nvme": [ 00:14:39.566 { 00:14:39.566 "ctrlr_data": { 00:14:39.566 "ana_reporting": false, 00:14:39.566 "cntlid": 1, 00:14:39.566 "firmware_revision": "24.09", 00:14:39.566 "model_number": "SPDK bdev Controller", 00:14:39.566 "multi_ctrlr": true, 00:14:39.566 "oacs": { 00:14:39.566 "firmware": 0, 00:14:39.566 "format": 0, 00:14:39.566 "ns_manage": 0, 00:14:39.566 "security": 0 00:14:39.566 }, 00:14:39.566 "serial_number": "SPDK0", 00:14:39.566 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:39.566 "vendor_id": "0x8086" 00:14:39.566 }, 00:14:39.566 "ns_data": { 00:14:39.566 "can_share": true, 00:14:39.566 "id": 1 00:14:39.566 }, 00:14:39.566 "trid": { 00:14:39.566 "adrfam": "IPv4", 00:14:39.566 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:39.566 "traddr": "10.0.0.2", 00:14:39.566 "trsvcid": "4420", 00:14:39.566 "trtype": "TCP" 00:14:39.566 }, 00:14:39.566 "vs": { 00:14:39.566 "nvme_version": "1.3" 00:14:39.566 } 00:14:39.566 } 00:14:39.566 ] 00:14:39.566 }, 00:14:39.566 "memory_domains": [ 00:14:39.566 { 00:14:39.566 "dma_device_id": "system", 00:14:39.566 "dma_device_type": 1 00:14:39.566 } 00:14:39.566 ], 00:14:39.566 "name": "Nvme0n1", 00:14:39.566 "num_blocks": 38912, 00:14:39.566 "product_name": "NVMe disk", 00:14:39.566 "supported_io_types": { 00:14:39.566 "abort": true, 00:14:39.566 "compare": true, 00:14:39.566 "compare_and_write": true, 00:14:39.566 "copy": true, 00:14:39.566 "flush": true, 00:14:39.566 "get_zone_info": false, 00:14:39.566 "nvme_admin": true, 00:14:39.566 "nvme_io": true, 00:14:39.566 "nvme_io_md": false, 00:14:39.566 "nvme_iov_md": false, 00:14:39.566 "read": true, 00:14:39.566 "reset": true, 00:14:39.566 "seek_data": false, 00:14:39.566 "seek_hole": false, 00:14:39.566 "unmap": true, 00:14:39.566 "write": true, 00:14:39.566 "write_zeroes": true, 00:14:39.566 "zcopy": false, 00:14:39.566 "zone_append": false, 00:14:39.566 "zone_management": false 00:14:39.566 }, 00:14:39.566 "uuid": "397fb55a-c79e-49a5-b401-d984a3b58f0a", 00:14:39.566 "zoned": false 00:14:39.566 } 00:14:39.566 ] 00:14:39.566 14:32:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=90909 
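The trace above finishes wiring up the lvs_grow_dirty fixture: a 200M file-backed AIO bdev, an lvstore with 4 MiB clusters on top of it, a 150 MiB lvol, the lvol exported over NVMe/TCP as nqn.2016-06.io.spdk:cnode0, and a bdevperf process (started with -z, so it waits on its own RPC socket) attaching to that subsystem, which then shows up as Nvme0n1. A condensed sketch of the same RPC sequence, assuming an SPDK checkout at ./spdk, a target already listening on 10.0.0.2, and a scratch file ./aio_file (the paths and shell variables here are illustrative, not taken from this run):

    # file-backed AIO bdev, then an lvstore and lvol on top of it
    truncate -s 200M ./aio_file
    ./spdk/scripts/rpc.py bdev_aio_create ./aio_file aio_bdev 4096
    lvs=$(./spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$(./spdk/scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)

    # export the lvol over NVMe/TCP
    ./spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    ./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    ./spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # tell the waiting bdevperf (-z) to attach; the namespace appears as Nvme0n1
    ./spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
              -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0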
00:14:39.566 14:32:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:39.566 14:32:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:39.566 Running I/O for 10 seconds... 00:14:40.518 Latency(us) 00:14:40.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.518 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.518 Nvme0n1 : 1.00 8200.00 32.03 0.00 0.00 0.00 0.00 0.00 00:14:40.518 =================================================================================================================== 00:14:40.518 Total : 8200.00 32.03 0.00 0.00 0.00 0.00 0.00 00:14:40.518 00:14:41.451 14:32:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fe43ba93-922f-48e2-8a4c-407395fa1604 00:14:41.451 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.451 Nvme0n1 : 2.00 8232.50 32.16 0.00 0.00 0.00 0.00 0.00 00:14:41.451 =================================================================================================================== 00:14:41.451 Total : 8232.50 32.16 0.00 0.00 0.00 0.00 0.00 00:14:41.451 00:14:41.708 true 00:14:41.708 14:32:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:41.708 14:32:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe43ba93-922f-48e2-8a4c-407395fa1604 00:14:41.965 14:32:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:41.965 14:32:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:41.965 14:32:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 90909 00:14:42.530 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.530 Nvme0n1 : 3.00 8221.67 32.12 0.00 0.00 0.00 0.00 0.00 00:14:42.530 =================================================================================================================== 00:14:42.530 Total : 8221.67 32.12 0.00 0.00 0.00 0.00 0.00 00:14:42.530 00:14:43.463 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.463 Nvme0n1 : 4.00 8193.50 32.01 0.00 0.00 0.00 0.00 0.00 00:14:43.463 =================================================================================================================== 00:14:43.463 Total : 8193.50 32.01 0.00 0.00 0.00 0.00 0.00 00:14:43.463 00:14:44.836 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.837 Nvme0n1 : 5.00 8178.40 31.95 0.00 0.00 0.00 0.00 0.00 00:14:44.837 =================================================================================================================== 00:14:44.837 Total : 8178.40 31.95 0.00 0.00 0.00 0.00 0.00 00:14:44.837 00:14:45.773 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.773 Nvme0n1 : 6.00 8140.00 31.80 0.00 0.00 0.00 0.00 0.00 00:14:45.773 =================================================================================================================== 00:14:45.773 Total : 8140.00 31.80 0.00 0.00 0.00 0.00 0.00 00:14:45.773 00:14:46.709 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:14:46.709 Nvme0n1 : 7.00 8111.00 31.68 0.00 0.00 0.00 0.00 0.00 00:14:46.709 =================================================================================================================== 00:14:46.709 Total : 8111.00 31.68 0.00 0.00 0.00 0.00 0.00 00:14:46.709 00:14:47.643 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.643 Nvme0n1 : 8.00 8034.88 31.39 0.00 0.00 0.00 0.00 0.00 00:14:47.643 =================================================================================================================== 00:14:47.643 Total : 8034.88 31.39 0.00 0.00 0.00 0.00 0.00 00:14:47.643 00:14:48.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.577 Nvme0n1 : 9.00 8003.89 31.27 0.00 0.00 0.00 0.00 0.00 00:14:48.577 =================================================================================================================== 00:14:48.577 Total : 8003.89 31.27 0.00 0.00 0.00 0.00 0.00 00:14:48.577 00:14:49.512 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:49.513 Nvme0n1 : 10.00 7974.90 31.15 0.00 0.00 0.00 0.00 0.00 00:14:49.513 =================================================================================================================== 00:14:49.513 Total : 7974.90 31.15 0.00 0.00 0.00 0.00 0.00 00:14:49.513 00:14:49.513 00:14:49.513 Latency(us) 00:14:49.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:49.513 Nvme0n1 : 10.02 7976.04 31.16 0.00 0.00 16042.66 3112.96 50998.92 00:14:49.513 =================================================================================================================== 00:14:49.513 Total : 7976.04 31.16 0.00 0.00 16042.66 3112.96 50998.92 00:14:49.513 0 00:14:49.513 14:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 90861 00:14:49.513 14:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 90861 ']' 00:14:49.513 14:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 90861 00:14:49.513 14:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:14:49.513 14:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:49.513 14:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90861 00:14:49.513 killing process with pid 90861 00:14:49.513 Received shutdown signal, test time was about 10.000000 seconds 00:14:49.513 00:14:49.513 Latency(us) 00:14:49.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.513 =================================================================================================================== 00:14:49.513 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:49.513 14:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:49.513 14:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:49.513 14:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90861' 00:14:49.513 14:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 90861 00:14:49.513 14:33:01 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 90861 00:14:49.771 14:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:50.029 14:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:50.288 14:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe43ba93-922f-48e2-8a4c-407395fa1604 00:14:50.288 14:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:50.546 14:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:50.546 14:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:50.546 14:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 90278 00:14:50.546 14:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 90278 00:14:50.833 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 90278 Killed "${NVMF_APP[@]}" "$@" 00:14:50.833 14:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:50.833 14:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:50.833 14:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:50.833 14:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:50.833 14:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:50.833 14:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=91072 00:14:50.833 14:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:50.833 14:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 91072 00:14:50.833 14:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 91072 ']' 00:14:50.833 14:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.833 14:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:50.833 14:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.833 14:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:50.833 14:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:50.833 [2024-07-10 14:33:02.914185] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 
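The runs above exercise the actual grow path while bdevperf keeps 128 random writes in flight: the backing file had already been truncated to 400M and rescanned, so bdev_lvol_grow_lvstore only has to claim the new space, and total_data_clusters moves from 49 to 99 (4 MiB clusters). The dirty variant then ends by sending the target SIGKILL instead of shutting it down cleanly, which is what leaves the lvstore unclean for the recovery exercised next. A minimal sketch of the grow-and-verify step, reusing the illustrative $lvs variable and ./aio_file path from the earlier sketch:

    # enlarge the backing file and let the AIO bdev pick up the new size
    truncate -s 400M ./aio_file
    ./spdk/scripts/rpc.py bdev_aio_rescan aio_bdev

    # grow the lvstore into the new space and confirm the cluster count
    ./spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"
    ./spdk/scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99

The rescan matters because bdev_aio_create sized the bdev when the file was 200M; without it the lvstore would still see the old 51200-block device.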
00:14:50.833 [2024-07-10 14:33:02.914278] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.833 [2024-07-10 14:33:03.034752] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:50.833 [2024-07-10 14:33:03.055943] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.833 [2024-07-10 14:33:03.096117] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.833 [2024-07-10 14:33:03.096171] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.833 [2024-07-10 14:33:03.096184] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.833 [2024-07-10 14:33:03.096194] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.833 [2024-07-10 14:33:03.096202] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:50.833 [2024-07-10 14:33:03.096236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.112 14:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:51.112 14:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:51.112 14:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:51.112 14:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:51.112 14:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:51.112 14:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.112 14:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:51.370 [2024-07-10 14:33:03.496381] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:51.370 [2024-07-10 14:33:03.496763] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:51.370 [2024-07-10 14:33:03.496887] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:51.370 14:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:51.370 14:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 397fb55a-c79e-49a5-b401-d984a3b58f0a 00:14:51.370 14:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=397fb55a-c79e-49a5-b401-d984a3b58f0a 00:14:51.370 14:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:51.370 14:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:51.370 14:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:51.370 14:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:51.370 14:33:03 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:51.628 14:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 397fb55a-c79e-49a5-b401-d984a3b58f0a -t 2000 00:14:51.886 [ 00:14:51.886 { 00:14:51.886 "aliases": [ 00:14:51.886 "lvs/lvol" 00:14:51.886 ], 00:14:51.886 "assigned_rate_limits": { 00:14:51.886 "r_mbytes_per_sec": 0, 00:14:51.886 "rw_ios_per_sec": 0, 00:14:51.886 "rw_mbytes_per_sec": 0, 00:14:51.886 "w_mbytes_per_sec": 0 00:14:51.886 }, 00:14:51.886 "block_size": 4096, 00:14:51.886 "claimed": false, 00:14:51.886 "driver_specific": { 00:14:51.886 "lvol": { 00:14:51.886 "base_bdev": "aio_bdev", 00:14:51.886 "clone": false, 00:14:51.886 "esnap_clone": false, 00:14:51.886 "lvol_store_uuid": "fe43ba93-922f-48e2-8a4c-407395fa1604", 00:14:51.886 "num_allocated_clusters": 38, 00:14:51.886 "snapshot": false, 00:14:51.886 "thin_provision": false 00:14:51.886 } 00:14:51.886 }, 00:14:51.886 "name": "397fb55a-c79e-49a5-b401-d984a3b58f0a", 00:14:51.886 "num_blocks": 38912, 00:14:51.886 "product_name": "Logical Volume", 00:14:51.886 "supported_io_types": { 00:14:51.886 "abort": false, 00:14:51.886 "compare": false, 00:14:51.886 "compare_and_write": false, 00:14:51.886 "copy": false, 00:14:51.886 "flush": false, 00:14:51.886 "get_zone_info": false, 00:14:51.886 "nvme_admin": false, 00:14:51.886 "nvme_io": false, 00:14:51.886 "nvme_io_md": false, 00:14:51.886 "nvme_iov_md": false, 00:14:51.886 "read": true, 00:14:51.886 "reset": true, 00:14:51.886 "seek_data": true, 00:14:51.886 "seek_hole": true, 00:14:51.886 "unmap": true, 00:14:51.886 "write": true, 00:14:51.886 "write_zeroes": true, 00:14:51.886 "zcopy": false, 00:14:51.886 "zone_append": false, 00:14:51.886 "zone_management": false 00:14:51.886 }, 00:14:51.886 "uuid": "397fb55a-c79e-49a5-b401-d984a3b58f0a", 00:14:51.886 "zoned": false 00:14:51.886 } 00:14:51.886 ] 00:14:51.886 14:33:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:51.886 14:33:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe43ba93-922f-48e2-8a4c-407395fa1604 00:14:51.886 14:33:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:52.145 14:33:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:52.145 14:33:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:52.145 14:33:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe43ba93-922f-48e2-8a4c-407395fa1604 00:14:52.403 14:33:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:52.403 14:33:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:52.661 [2024-07-10 14:33:04.890100] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:52.661 14:33:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe43ba93-922f-48e2-8a4c-407395fa1604 
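Recovery worked: re-creating the AIO bdev triggered the blobstore recovery pass noted above, and the lvol came back with its 38 allocated clusters intact (free_clusters 61 of 99). Deleting the backing AIO bdev then hot-removes the lvstore (the vbdev_lvs_hotremove_cb notice), so the bdev_lvol_get_lvstores call wrapped in NOT just below is required to fail. A rough equivalent of that negative check, again assuming the lvstore UUID in the illustrative $lvs variable:

    ./spdk/scripts/rpc.py bdev_aio_delete aio_bdev
    if ./spdk/scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs"; then
        echo "lvstore should have been hot-removed with its base bdev" >&2
        exit 1
    fi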
00:14:52.661 14:33:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:14:52.661 14:33:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe43ba93-922f-48e2-8a4c-407395fa1604 00:14:52.661 14:33:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:52.661 14:33:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:52.661 14:33:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:52.661 14:33:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:52.661 14:33:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:52.661 14:33:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:52.661 14:33:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:52.661 14:33:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:52.661 14:33:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe43ba93-922f-48e2-8a4c-407395fa1604 00:14:52.920 2024/07/10 14:33:05 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:fe43ba93-922f-48e2-8a4c-407395fa1604], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:52.920 request: 00:14:52.920 { 00:14:52.920 "method": "bdev_lvol_get_lvstores", 00:14:52.920 "params": { 00:14:52.921 "uuid": "fe43ba93-922f-48e2-8a4c-407395fa1604" 00:14:52.921 } 00:14:52.921 } 00:14:52.921 Got JSON-RPC error response 00:14:52.921 GoRPCClient: error on JSON-RPC call 00:14:53.179 14:33:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:14:53.179 14:33:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:53.179 14:33:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:53.179 14:33:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:53.179 14:33:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:53.438 aio_bdev 00:14:53.438 14:33:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 397fb55a-c79e-49a5-b401-d984a3b58f0a 00:14:53.438 14:33:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=397fb55a-c79e-49a5-b401-d984a3b58f0a 00:14:53.438 14:33:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:53.438 14:33:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:53.438 14:33:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:53.438 14:33:05 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:53.438 14:33:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:53.696 14:33:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 397fb55a-c79e-49a5-b401-d984a3b58f0a -t 2000 00:14:53.955 [ 00:14:53.955 { 00:14:53.955 "aliases": [ 00:14:53.955 "lvs/lvol" 00:14:53.955 ], 00:14:53.955 "assigned_rate_limits": { 00:14:53.955 "r_mbytes_per_sec": 0, 00:14:53.955 "rw_ios_per_sec": 0, 00:14:53.955 "rw_mbytes_per_sec": 0, 00:14:53.955 "w_mbytes_per_sec": 0 00:14:53.955 }, 00:14:53.955 "block_size": 4096, 00:14:53.955 "claimed": false, 00:14:53.955 "driver_specific": { 00:14:53.955 "lvol": { 00:14:53.955 "base_bdev": "aio_bdev", 00:14:53.955 "clone": false, 00:14:53.955 "esnap_clone": false, 00:14:53.955 "lvol_store_uuid": "fe43ba93-922f-48e2-8a4c-407395fa1604", 00:14:53.955 "num_allocated_clusters": 38, 00:14:53.955 "snapshot": false, 00:14:53.955 "thin_provision": false 00:14:53.955 } 00:14:53.955 }, 00:14:53.955 "name": "397fb55a-c79e-49a5-b401-d984a3b58f0a", 00:14:53.955 "num_blocks": 38912, 00:14:53.955 "product_name": "Logical Volume", 00:14:53.955 "supported_io_types": { 00:14:53.955 "abort": false, 00:14:53.955 "compare": false, 00:14:53.955 "compare_and_write": false, 00:14:53.955 "copy": false, 00:14:53.955 "flush": false, 00:14:53.955 "get_zone_info": false, 00:14:53.955 "nvme_admin": false, 00:14:53.955 "nvme_io": false, 00:14:53.955 "nvme_io_md": false, 00:14:53.955 "nvme_iov_md": false, 00:14:53.955 "read": true, 00:14:53.955 "reset": true, 00:14:53.955 "seek_data": true, 00:14:53.955 "seek_hole": true, 00:14:53.955 "unmap": true, 00:14:53.955 "write": true, 00:14:53.955 "write_zeroes": true, 00:14:53.955 "zcopy": false, 00:14:53.955 "zone_append": false, 00:14:53.955 "zone_management": false 00:14:53.955 }, 00:14:53.955 "uuid": "397fb55a-c79e-49a5-b401-d984a3b58f0a", 00:14:53.955 "zoned": false 00:14:53.955 } 00:14:53.955 ] 00:14:53.955 14:33:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:53.955 14:33:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:53.955 14:33:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe43ba93-922f-48e2-8a4c-407395fa1604 00:14:54.213 14:33:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:54.213 14:33:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe43ba93-922f-48e2-8a4c-407395fa1604 00:14:54.213 14:33:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:54.471 14:33:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:54.471 14:33:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 397fb55a-c79e-49a5-b401-d984a3b58f0a 00:14:54.730 14:33:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fe43ba93-922f-48e2-8a4c-407395fa1604 
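Before the explicit bdev_lvol_delete / bdev_lvol_delete_lvstore cleanup above, the test brings the AIO bdev back one more time and waits for the examine pass to re-register the recovered lvol before trusting any query: in this harness, waitforbdev boils down to bdev_wait_for_examine plus a bdev_get_bdevs lookup with a 2000 ms timeout. A condensed wait-and-verify step, with the lvol UUID assumed in $lvol and the lvstore UUID in $lvs:

    ./spdk/scripts/rpc.py bdev_wait_for_examine
    ./spdk/scripts/rpc.py bdev_get_bdevs -b "$lvol" -t 2000 > /dev/null    # -t waits up to 2 s for the bdev to appear

    # the recovered lvstore should still show 38 of 99 clusters in use
    ./spdk/scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # expect 61
    ./spdk/scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99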
00:14:54.988 14:33:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:55.246 14:33:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:55.505 00:14:55.505 real 0m20.642s 00:14:55.505 user 0m43.027s 00:14:55.505 sys 0m7.841s 00:14:55.505 14:33:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:55.505 14:33:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:55.505 ************************************ 00:14:55.505 END TEST lvs_grow_dirty 00:14:55.505 ************************************ 00:14:55.764 14:33:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:55.764 14:33:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:55.764 14:33:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:14:55.764 14:33:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:14:55.764 14:33:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:55.764 14:33:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:55.764 14:33:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:55.764 14:33:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:55.764 14:33:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:55.764 14:33:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:55.764 nvmf_trace.0 00:14:55.764 14:33:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:14:55.764 14:33:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:55.764 14:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:55.764 14:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:14:55.764 14:33:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:55.764 14:33:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:14:55.764 14:33:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:55.764 14:33:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:55.764 rmmod nvme_tcp 00:14:56.023 rmmod nvme_fabrics 00:14:56.023 rmmod nvme_keyring 00:14:56.023 14:33:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:56.023 14:33:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:14:56.023 14:33:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:14:56.023 14:33:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 91072 ']' 00:14:56.023 14:33:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 91072 00:14:56.023 14:33:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 91072 ']' 00:14:56.023 14:33:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 91072 00:14:56.023 14:33:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:14:56.023 14:33:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:56.023 14:33:08 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91072 00:14:56.023 14:33:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:56.023 14:33:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:56.023 killing process with pid 91072 00:14:56.023 14:33:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91072' 00:14:56.023 14:33:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 91072 00:14:56.023 14:33:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 91072 00:14:56.023 14:33:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:56.023 14:33:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:56.023 14:33:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:56.023 14:33:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:56.023 14:33:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:56.023 14:33:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.023 14:33:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.023 14:33:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.023 14:33:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:56.023 00:14:56.023 real 0m40.423s 00:14:56.023 user 1m6.284s 00:14:56.023 sys 0m10.488s 00:14:56.023 14:33:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:56.023 14:33:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:56.023 ************************************ 00:14:56.023 END TEST nvmf_lvs_grow 00:14:56.023 ************************************ 00:14:56.281 14:33:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:56.281 14:33:08 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:56.281 14:33:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:56.281 14:33:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:56.281 14:33:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:56.281 ************************************ 00:14:56.281 START TEST nvmf_bdev_io_wait 00:14:56.281 ************************************ 00:14:56.281 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:56.281 * Looking for test storage... 
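The EXIT trap installed earlier does the heavy lifting here: it archives the SPDK trace shared-memory file for offline analysis and then unwinds the target stack (unload the kernel NVMe/TCP initiator modules, kill the nvmf_tgt process, drop the test addresses and namespace). A stripped-down sketch of that cleanup, with the target pid assumed in $nvmfpid and ./output as an illustrative destination; the final netns deletion is an assumed equivalent of the harness's _remove_spdk_ns helper, whose body is not shown in this log:

    # keep the trace buffer so it can be replayed with spdk_trace later
    tar -C /dev/shm/ -cvzf ./output/nvmf_trace.0_shm.tar.gz nvmf_trace.0

    # unwind the kernel initiator side and the target process
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"

    ip -4 addr flush nvmf_init_if
    ip netns delete nvmf_tgt_ns_spdk    # assumption: what _remove_spdk_ns amounts to here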
00:14:56.281 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:56.281 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:56.281 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:56.281 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.281 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.281 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.281 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.281 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.281 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.281 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.281 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.281 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.281 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.281 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:14:56.281 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:14:56.281 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.281 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.281 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:56.281 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.281 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:56.281 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.281 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.281 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.281 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.281 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.281 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.281 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:56.282 Cannot find device "nvmf_tgt_br" 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:56.282 Cannot find device "nvmf_tgt_br2" 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:56.282 Cannot find device "nvmf_tgt_br" 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:56.282 Cannot find device "nvmf_tgt_br2" 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:14:56.282 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:56.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:56.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:56.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:14:56.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:14:56.549 00:14:56.549 --- 10.0.0.2 ping statistics --- 00:14:56.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.549 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:14:56.549 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:56.550 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:56.550 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:14:56.550 00:14:56.550 --- 10.0.0.3 ping statistics --- 00:14:56.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.550 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:14:56.550 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:56.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:56.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:14:56.550 00:14:56.550 --- 10.0.0.1 ping statistics --- 00:14:56.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.550 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:56.550 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.550 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:14:56.550 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:56.550 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.550 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:56.550 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:56.550 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.550 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:56.550 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:56.808 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:56.808 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:56.808 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:56.808 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:56.808 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=91472 00:14:56.808 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 91472 00:14:56.808 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:56.808 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 91472 ']' 00:14:56.808 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.808 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:56.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.808 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
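The three successful pings above validate the virtual topology every tcp-transport test in this log runs on: the target lives inside the nvmf_tgt_ns_spdk namespace and owns 10.0.0.2 (nvmf_tgt_if) and 10.0.0.3 (nvmf_tgt_if2), the initiator side keeps 10.0.0.1 on nvmf_init_if, and all the veth peers hang off the nvmf_br bridge, with iptables opened for port 4420. A reduced sketch of that wiring, using the same commands as the trace but showing only one target leg:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2    # initiator side reaching the target address

The bridge is what lets the host-side veth peers and the namespaced interfaces share one L2 segment, so 10.0.0.1 and 10.0.0.2 sit on the same /24 without any routing in between.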
00:14:56.808 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:56.808 14:33:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:56.808 [2024-07-10 14:33:08.906088] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:14:56.808 [2024-07-10 14:33:08.906193] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.808 [2024-07-10 14:33:09.029243] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:56.808 [2024-07-10 14:33:09.049140] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:56.808 [2024-07-10 14:33:09.094349] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.808 [2024-07-10 14:33:09.094411] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.808 [2024-07-10 14:33:09.094425] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.808 [2024-07-10 14:33:09.094435] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.808 [2024-07-10 14:33:09.094444] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.808 [2024-07-10 14:33:09.094579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.808 [2024-07-10 14:33:09.094647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.808 [2024-07-10 14:33:09.095255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:56.808 [2024-07-10 14:33:09.095311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.066 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:57.066 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:14:57.066 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:57.067 [2024-07-10 14:33:09.227902] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:57.067 Malloc0 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:57.067 [2024-07-10 14:33:09.275915] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=91511 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=91513 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:57.067 { 00:14:57.067 "params": { 00:14:57.067 "name": "Nvme$subsystem", 00:14:57.067 "trtype": "$TEST_TRANSPORT", 00:14:57.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:57.067 "adrfam": "ipv4", 00:14:57.067 "trsvcid": "$NVMF_PORT", 00:14:57.067 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:14:57.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:57.067 "hdgst": ${hdgst:-false}, 00:14:57.067 "ddgst": ${ddgst:-false} 00:14:57.067 }, 00:14:57.067 "method": "bdev_nvme_attach_controller" 00:14:57.067 } 00:14:57.067 EOF 00:14:57.067 )") 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=91515 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=91518 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:57.067 { 00:14:57.067 "params": { 00:14:57.067 "name": "Nvme$subsystem", 00:14:57.067 "trtype": "$TEST_TRANSPORT", 00:14:57.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:57.067 "adrfam": "ipv4", 00:14:57.067 "trsvcid": "$NVMF_PORT", 00:14:57.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:57.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:57.067 "hdgst": ${hdgst:-false}, 00:14:57.067 "ddgst": ${ddgst:-false} 00:14:57.067 }, 00:14:57.067 "method": "bdev_nvme_attach_controller" 00:14:57.067 } 00:14:57.067 EOF 00:14:57.067 )") 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:57.067 { 00:14:57.067 "params": { 00:14:57.067 "name": "Nvme$subsystem", 00:14:57.067 "trtype": "$TEST_TRANSPORT", 00:14:57.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:57.067 "adrfam": "ipv4", 00:14:57.067 "trsvcid": "$NVMF_PORT", 00:14:57.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:57.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:57.067 "hdgst": ${hdgst:-false}, 00:14:57.067 "ddgst": ${ddgst:-false} 00:14:57.067 }, 00:14:57.067 "method": "bdev_nvme_attach_controller" 00:14:57.067 } 00:14:57.067 EOF 00:14:57.067 )") 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:57.067 "params": { 00:14:57.067 "name": "Nvme1", 00:14:57.067 "trtype": "tcp", 00:14:57.067 "traddr": "10.0.0.2", 00:14:57.067 "adrfam": "ipv4", 00:14:57.067 "trsvcid": "4420", 00:14:57.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:57.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:57.067 "hdgst": false, 00:14:57.067 "ddgst": false 00:14:57.067 }, 00:14:57.067 "method": "bdev_nvme_attach_controller" 00:14:57.067 }' 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:57.067 { 00:14:57.067 "params": { 00:14:57.067 "name": "Nvme$subsystem", 00:14:57.067 "trtype": "$TEST_TRANSPORT", 00:14:57.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:57.067 "adrfam": "ipv4", 00:14:57.067 "trsvcid": "$NVMF_PORT", 00:14:57.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:57.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:57.067 "hdgst": ${hdgst:-false}, 00:14:57.067 "ddgst": ${ddgst:-false} 00:14:57.067 }, 00:14:57.067 "method": "bdev_nvme_attach_controller" 00:14:57.067 } 00:14:57.067 EOF 00:14:57.067 )") 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:57.067 "params": { 00:14:57.067 "name": "Nvme1", 00:14:57.067 "trtype": "tcp", 00:14:57.067 "traddr": "10.0.0.2", 00:14:57.067 "adrfam": "ipv4", 00:14:57.067 "trsvcid": "4420", 00:14:57.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:57.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:57.067 "hdgst": false, 00:14:57.067 "ddgst": false 00:14:57.067 }, 00:14:57.067 "method": "bdev_nvme_attach_controller" 00:14:57.067 }' 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
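Each bdevperf instance is handed its connection parameters as JSON on /dev/fd/63 rather than on the command line; the heredoc fragments in the trace are gen_nvmf_target_json assembling one bdev_nvme_attach_controller object per subsystem and comma-joining them, with the printf output showing the values this run substituted (traddr 10.0.0.2, trsvcid 4420). A reduced stand-in is sketched below under a hypothetical name, gen_target_json_sketch; it omits the outer bdevperf JSON config wrapper and the jq pass that the real helper applies to the full document:

gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do                # default: a single subsystem, "1"
        config+=("$(
            cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"                  # real helper embeds this, comma-joined, in a full config and runs jq .
}

bdevperf then reads the generated document through process substitution, which is why the trace shows --json /dev/fd/63 on every invocation.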
00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:57.067 "params": { 00:14:57.067 "name": "Nvme1", 00:14:57.067 "trtype": "tcp", 00:14:57.067 "traddr": "10.0.0.2", 00:14:57.067 "adrfam": "ipv4", 00:14:57.067 "trsvcid": "4420", 00:14:57.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:57.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:57.067 "hdgst": false, 00:14:57.067 "ddgst": false 00:14:57.067 }, 00:14:57.067 "method": "bdev_nvme_attach_controller" 00:14:57.067 }' 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:57.067 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:57.068 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:57.068 "params": { 00:14:57.068 "name": "Nvme1", 00:14:57.068 "trtype": "tcp", 00:14:57.068 "traddr": "10.0.0.2", 00:14:57.068 "adrfam": "ipv4", 00:14:57.068 "trsvcid": "4420", 00:14:57.068 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:57.068 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:57.068 "hdgst": false, 00:14:57.068 "ddgst": false 00:14:57.068 }, 00:14:57.068 "method": "bdev_nvme_attach_controller" 00:14:57.068 }' 00:14:57.068 14:33:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 91511 00:14:57.068 [2024-07-10 14:33:09.348507] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:14:57.068 [2024-07-10 14:33:09.348644] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:57.326 [2024-07-10 14:33:09.359011] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:14:57.326 [2024-07-10 14:33:09.359103] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:57.326 [2024-07-10 14:33:09.369864] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:14:57.326 [2024-07-10 14:33:09.369938] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:57.326 [2024-07-10 14:33:09.371137] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:14:57.326 [2024-07-10 14:33:09.371250] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:57.326 [2024-07-10 14:33:09.516913] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
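The four bdevperf invocations differ only in core mask, trace instance id and workload (0x10/write, 0x20/read, 0x40/flush, 0x80/unmap); bdev_io_wait.sh runs them concurrently against the same subsystem and then waits on each PID, which is why their startup output is interleaved above. Compressed into a sketch (using the gen_target_json_sketch stand-in from above, and waiting on all PIDs at once where the script waits on them one by one):

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

run_job() {    # $1 = core mask, $2 = trace instance id, $3 = workload
    "$BDEVPERF" -m "$1" -i "$2" --json <(gen_target_json_sketch) \
        -q 128 -o 4096 -w "$3" -t 1 -s 256 &
}

run_job 0x10 1 write;  WRITE_PID=$!
run_job 0x20 2 read;   READ_PID=$!
run_job 0x40 3 flush;  FLUSH_PID=$!
run_job 0x80 4 unmap;  UNMAP_PID=$!
sync
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"

The per-job latency tables that follow are the four processes finishing their one-second runs.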
00:14:57.326 [2024-07-10 14:33:09.535069] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.326 [2024-07-10 14:33:09.554713] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:57.326 [2024-07-10 14:33:09.566324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:57.326 [2024-07-10 14:33:09.576052] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.326 [2024-07-10 14:33:09.600699] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:57.326 [2024-07-10 14:33:09.604503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:57.584 [2024-07-10 14:33:09.623008] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.584 [2024-07-10 14:33:09.645116] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:57.584 [2024-07-10 14:33:09.659128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:14:57.584 [2024-07-10 14:33:09.665481] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.584 Running I/O for 1 seconds... 00:14:57.584 [2024-07-10 14:33:09.693395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:57.584 Running I/O for 1 seconds... 00:14:57.584 Running I/O for 1 seconds... 00:14:57.584 Running I/O for 1 seconds... 00:14:58.518 00:14:58.518 Latency(us) 00:14:58.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:58.518 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:58.518 Nvme1n1 : 1.02 5812.16 22.70 0.00 0.00 21785.94 10187.87 41228.10 00:14:58.518 =================================================================================================================== 00:14:58.518 Total : 5812.16 22.70 0.00 0.00 21785.94 10187.87 41228.10 00:14:58.518 00:14:58.518 Latency(us) 00:14:58.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:58.518 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:58.518 Nvme1n1 : 1.00 177245.89 692.37 0.00 0.00 718.99 294.17 1936.29 00:14:58.518 =================================================================================================================== 00:14:58.518 Total : 177245.89 692.37 0.00 0.00 718.99 294.17 1936.29 00:14:58.775 00:14:58.775 Latency(us) 00:14:58.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:58.776 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:58.776 Nvme1n1 : 1.01 9539.97 37.27 0.00 0.00 13356.57 6791.91 22401.40 00:14:58.776 =================================================================================================================== 00:14:58.776 Total : 9539.97 37.27 0.00 0.00 13356.57 6791.91 22401.40 00:14:58.776 00:14:58.776 Latency(us) 00:14:58.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:58.776 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:58.776 Nvme1n1 : 1.01 5809.54 22.69 0.00 0.00 21963.47 5719.51 48615.80 00:14:58.776 =================================================================================================================== 00:14:58.776 Total : 5809.54 22.69 0.00 0.00 21963.47 5719.51 48615.80 00:14:58.776 14:33:11 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 91513 00:14:58.776 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 91515 00:14:58.776 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 91518 00:14:58.776 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:58.776 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.776 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:58.776 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.776 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:58.776 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:58.776 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:58.776 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:14:58.776 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:58.776 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:14:58.776 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:58.776 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:58.776 rmmod nvme_tcp 00:14:59.034 rmmod nvme_fabrics 00:14:59.034 rmmod nvme_keyring 00:14:59.034 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:59.034 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:14:59.034 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:14:59.034 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 91472 ']' 00:14:59.034 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 91472 00:14:59.034 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 91472 ']' 00:14:59.034 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 91472 00:14:59.034 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:14:59.034 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:59.034 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91472 00:14:59.034 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:59.034 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:59.034 killing process with pid 91472 00:14:59.034 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91472' 00:14:59.034 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 91472 00:14:59.034 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 91472 00:14:59.034 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:59.034 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:59.034 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:59.034 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:59.034 14:33:11 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:59.034 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.034 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:59.034 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.034 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:59.034 ************************************ 00:14:59.034 END TEST nvmf_bdev_io_wait 00:14:59.034 ************************************ 00:14:59.034 00:14:59.034 real 0m2.944s 00:14:59.034 user 0m13.155s 00:14:59.034 sys 0m1.686s 00:14:59.034 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:59.034 14:33:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:59.292 14:33:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:59.292 14:33:11 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:59.292 14:33:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:59.292 14:33:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:59.292 14:33:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:59.292 ************************************ 00:14:59.292 START TEST nvmf_queue_depth 00:14:59.292 ************************************ 00:14:59.292 14:33:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:59.292 * Looking for test storage... 00:14:59.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:59.292 14:33:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:59.292 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:14:59.292 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:59.292 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:59.292 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:59.292 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:59.292 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:59.292 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:59.292 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:59.292 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:59.292 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:59.292 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:59.292 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:14:59.292 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:14:59.292 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:59.292 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:14:59.292 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:59.292 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:59.292 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:59.292 14:33:11 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.292 14:33:11 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.292 14:33:11 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.292 14:33:11 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.292 14:33:11 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.292 14:33:11 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:59.293 
14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:59.293 Cannot find device "nvmf_tgt_br" 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:59.293 Cannot find device "nvmf_tgt_br2" 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:59.293 Cannot find device "nvmf_tgt_br" 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:59.293 Cannot find device "nvmf_tgt_br2" 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:59.293 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:59.551 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:59.551 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:14:59.551 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:59.551 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:59.551 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:14:59.551 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:59.551 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:59.551 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:59.551 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:59.551 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:59.551 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:59.551 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:59.551 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:59.551 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:59.551 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:59.551 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:59.551 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:59.551 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set 
nvmf_tgt_br2 up 00:14:59.551 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:59.551 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:59.551 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:59.551 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:59.551 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:59.551 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:59.552 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:59.552 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:59.552 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:59.552 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:59.552 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:59.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:59.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:14:59.552 00:14:59.552 --- 10.0.0.2 ping statistics --- 00:14:59.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.552 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:14:59.552 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:59.552 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:59.552 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:14:59.552 00:14:59.552 --- 10.0.0.3 ping statistics --- 00:14:59.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.552 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:59.552 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:59.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:59.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:14:59.552 00:14:59.552 --- 10.0.0.1 ping statistics --- 00:14:59.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.552 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:14:59.552 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:59.552 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:14:59.552 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:59.552 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:59.552 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:59.552 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:59.552 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:59.552 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:59.552 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:59.552 14:33:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:59.552 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:59.552 14:33:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:59.552 14:33:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:59.552 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=91722 00:14:59.552 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 91722 00:14:59.552 14:33:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:59.552 14:33:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 91722 ']' 00:14:59.552 14:33:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.552 14:33:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:59.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.552 14:33:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.552 14:33:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:59.552 14:33:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:59.810 [2024-07-10 14:33:11.855770] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:14:59.810 [2024-07-10 14:33:11.855860] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.810 [2024-07-10 14:33:11.977931] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:59.810 [2024-07-10 14:33:11.994002] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.810 [2024-07-10 14:33:12.030102] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:59.810 [2024-07-10 14:33:12.030148] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.810 [2024-07-10 14:33:12.030159] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.810 [2024-07-10 14:33:12.030168] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.810 [2024-07-10 14:33:12.030175] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:59.810 [2024-07-10 14:33:12.030200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:00.069 [2024-07-10 14:33:12.151925] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:00.069 Malloc0 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:00.069 [2024-07-10 
14:33:12.213269] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=91753 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 91753 /var/tmp/bdevperf.sock 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 91753 ']' 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:00.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:00.069 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:00.069 [2024-07-10 14:33:12.274649] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:15:00.069 [2024-07-10 14:33:12.274739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91753 ] 00:15:00.328 [2024-07-10 14:33:12.397445] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:00.328 [2024-07-10 14:33:12.421723] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.328 [2024-07-10 14:33:12.463909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.328 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:00.328 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:00.328 14:33:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:00.328 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.328 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:00.586 NVMe0n1 00:15:00.586 14:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.586 14:33:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:00.586 Running I/O for 10 seconds... 
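For the queue-depth test the host side works differently from the bdev_io_wait jobs: bdevperf is started idle (-z) with its own RPC socket, the remote namespace is attached to it as a bdev over that socket, and only then is the 10 second verify workload at queue depth 1024 started through bdevperf.py. Condensed from the trace into a sketch (rpc_cmd resolves to scripts/rpc.py; the harness additionally waits for the socket to appear and cleans up with killprocess):

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
RPC_SOCK=/var/tmp/bdevperf.sock

"$BDEVPERF" -z -r "$RPC_SOCK" -q 1024 -o 4096 -w verify -t 10 &   # sits idle until perform_tests
bdevperf_pid=$!

# attach the target's namespace to this bdevperf instance; it shows up as NVMe0n1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$RPC_SOCK" \
    bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# kick off the configured job and block until the 10 s run completes
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$RPC_SOCK" perform_tests

kill "$bdevperf_pid"

The latency table below is the output of that single verify job.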
00:15:10.563 00:15:10.563 Latency(us) 00:15:10.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.563 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:10.563 Verification LBA range: start 0x0 length 0x4000 00:15:10.563 NVMe0n1 : 10.08 8587.38 33.54 0.00 0.00 118627.37 28359.21 87699.08 00:15:10.563 =================================================================================================================== 00:15:10.563 Total : 8587.38 33.54 0.00 0.00 118627.37 28359.21 87699.08 00:15:10.563 0 00:15:10.563 14:33:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 91753 00:15:10.563 14:33:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 91753 ']' 00:15:10.563 14:33:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 91753 00:15:10.563 14:33:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:10.563 14:33:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:10.563 14:33:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91753 00:15:10.563 14:33:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:10.563 14:33:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:10.563 14:33:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91753' 00:15:10.563 killing process with pid 91753 00:15:10.563 14:33:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 91753 00:15:10.563 Received shutdown signal, test time was about 10.000000 seconds 00:15:10.563 00:15:10.563 Latency(us) 00:15:10.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.563 =================================================================================================================== 00:15:10.563 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:10.563 14:33:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 91753 00:15:10.825 14:33:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:10.825 14:33:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:10.825 14:33:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:10.825 14:33:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:15:10.825 14:33:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:10.825 14:33:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:15:10.825 14:33:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:10.825 14:33:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:10.825 rmmod nvme_tcp 00:15:10.825 rmmod nvme_fabrics 00:15:10.825 rmmod nvme_keyring 00:15:10.825 14:33:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:10.825 14:33:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:15:10.825 14:33:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:15:10.825 14:33:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 91722 ']' 00:15:10.825 14:33:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 91722 00:15:10.825 14:33:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 91722 ']' 00:15:10.825 
14:33:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 91722 00:15:10.825 14:33:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:10.825 14:33:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:10.825 14:33:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91722 00:15:10.825 14:33:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:10.825 14:33:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:10.825 killing process with pid 91722 00:15:10.825 14:33:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91722' 00:15:10.825 14:33:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 91722 00:15:10.825 14:33:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 91722 00:15:11.085 14:33:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:11.085 14:33:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:11.085 14:33:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:11.085 14:33:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:11.085 14:33:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:11.085 14:33:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.085 14:33:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:11.085 14:33:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.085 14:33:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:11.085 00:15:11.085 real 0m11.932s 00:15:11.085 user 0m20.762s 00:15:11.085 sys 0m1.780s 00:15:11.085 14:33:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:11.085 14:33:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:11.085 ************************************ 00:15:11.085 END TEST nvmf_queue_depth 00:15:11.085 ************************************ 00:15:11.085 14:33:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:11.085 14:33:23 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:11.085 14:33:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:11.085 14:33:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:11.085 14:33:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:11.085 ************************************ 00:15:11.085 START TEST nvmf_target_multipath 00:15:11.085 ************************************ 00:15:11.085 14:33:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:11.343 * Looking for test storage... 
00:15:11.343 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:11.343 14:33:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:11.343 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:11.343 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:11.343 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:11.343 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:11.343 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:11.343 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:11.343 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:11.343 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:11.343 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:11.343 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:11.343 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:11.343 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:15:11.343 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:15:11.343 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:11.343 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:11.343 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:11.343 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:11.343 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:11.343 14:33:23 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:11.343 14:33:23 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:11.343 14:33:23 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:11.344 14:33:23 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:11.344 Cannot find device "nvmf_tgt_br" 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:11.344 Cannot find device "nvmf_tgt_br2" 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:11.344 Cannot find device "nvmf_tgt_br" 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:15:11.344 
14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:11.344 Cannot find device "nvmf_tgt_br2" 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:11.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:11.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:11.344 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:11.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:11.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:15:11.602 00:15:11.602 --- 10.0.0.2 ping statistics --- 00:15:11.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.602 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:11.602 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:11.602 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:15:11.602 00:15:11.602 --- 10.0.0.3 ping statistics --- 00:15:11.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.602 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:11.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:11.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:11.602 00:15:11.602 --- 10.0.0.1 ping statistics --- 00:15:11.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.602 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=92062 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
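For reference, the dual-path test network that nvmf_veth_init assembles above can be reproduced by hand with the same ip/iptables calls echoed in this log; the sketch below (bash, run as root) simply collects them in order. The namespace, interface, and bridge names are the harness's own, and 10.0.0.1/2/3 are the initiator address and the two target addresses pinged above.

  # target runs in its own network namespace; the initiator stays in the default one
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # one initiator address, two target addresses (one per path)
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring up both ends of each veth pair (plus loopback in the target namespace)
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the three host-side veth ends so the initiator reaches both target ports
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # accept NVMe/TCP (port 4420) on the initiator interface and allow intra-bridge forwarding
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT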
00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 92062 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 92062 ']' 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:11.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:11.602 14:33:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:11.602 [2024-07-10 14:33:23.860554] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:15:11.603 [2024-07-10 14:33:23.860687] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.861 [2024-07-10 14:33:23.983657] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:11.861 [2024-07-10 14:33:23.997303] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:11.861 [2024-07-10 14:33:24.038907] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.861 [2024-07-10 14:33:24.038972] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.861 [2024-07-10 14:33:24.038990] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:11.861 [2024-07-10 14:33:24.039004] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:11.861 [2024-07-10 14:33:24.039016] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
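Once the reactor startup notices that follow confirm the target is running, the multipath provisioning recorded in the rest of this test reduces to a short sequence of RPC and nvme-cli calls: one malloc namespace exported behind two TCP listeners, then connected from the host over both paths. A condensed sketch of those calls as they appear later in this log (the NQN, serial, and generated hostnqn/hostid values are the ones shown in this run; -g -G are passed exactly as the harness does):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # connect from the initiator over both paths
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 \
      --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -g -G
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 \
      --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -g -G
  # failover is then exercised by flipping ANA states per listener, e.g.:
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized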
00:15:11.861 [2024-07-10 14:33:24.039343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.861 [2024-07-10 14:33:24.039462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:11.861 [2024-07-10 14:33:24.039929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:11.861 [2024-07-10 14:33:24.039944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.795 14:33:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:12.795 14:33:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:15:12.795 14:33:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:12.795 14:33:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:12.795 14:33:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:12.795 14:33:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.795 14:33:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:12.795 [2024-07-10 14:33:25.062454] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:13.053 14:33:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:13.053 Malloc0 00:15:13.310 14:33:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:13.310 14:33:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:13.875 14:33:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:13.875 [2024-07-10 14:33:26.098598] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:13.875 14:33:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:14.132 [2024-07-10 14:33:26.334799] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:14.132 14:33:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:14.390 14:33:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:14.647 14:33:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:14.648 14:33:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:15:14.648 14:33:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:15:14.648 14:33:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:14.648 14:33:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=92205 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:16.546 14:33:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:15:16.546 [global] 00:15:16.546 thread=1 00:15:16.546 invalidate=1 00:15:16.546 rw=randrw 00:15:16.546 time_based=1 00:15:16.546 runtime=6 00:15:16.546 ioengine=libaio 00:15:16.546 direct=1 00:15:16.546 bs=4096 00:15:16.546 iodepth=128 00:15:16.546 norandommap=0 00:15:16.546 numjobs=1 00:15:16.546 00:15:16.546 verify_dump=1 00:15:16.546 verify_backlog=512 00:15:16.546 verify_state_save=0 00:15:16.546 do_verify=1 00:15:16.546 verify=crc32c-intel 00:15:16.546 [job0] 00:15:16.546 filename=/dev/nvme0n1 00:15:16.546 Could not set queue depth (nvme0n1) 00:15:16.804 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:16.804 fio-3.35 00:15:16.804 Starting 1 thread 00:15:17.739 14:33:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:17.997 14:33:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:18.257 14:33:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:15:18.257 14:33:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:18.257 14:33:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:18.257 14:33:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:18.257 14:33:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:15:18.257 14:33:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:18.257 14:33:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:15:18.257 14:33:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:18.257 14:33:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:18.257 14:33:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:18.257 14:33:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:18.257 14:33:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:18.257 14:33:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:15:19.198 14:33:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:19.198 14:33:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:19.198 14:33:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:19.198 14:33:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:19.455 14:33:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:19.714 14:33:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:15:19.714 14:33:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:19.714 14:33:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:19.714 14:33:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:19.714 14:33:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:19.714 14:33:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:19.714 14:33:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:15:19.714 14:33:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:19.714 14:33:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:19.714 14:33:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:19.714 14:33:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:19.714 14:33:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:19.714 14:33:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:15:21.089 14:33:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:21.089 14:33:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:21.089 14:33:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:21.089 14:33:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 92205 00:15:22.988 00:15:22.988 job0: (groupid=0, jobs=1): err= 0: pid=92226: Wed Jul 10 14:33:35 2024 00:15:22.988 read: IOPS=10.4k, BW=40.6MiB/s (42.6MB/s)(244MiB/6005msec) 00:15:22.988 slat (usec): min=3, max=9803, avg=56.13, stdev=256.20 00:15:22.988 clat (usec): min=841, max=25026, avg=8470.26, stdev=1682.09 00:15:22.988 lat (usec): min=909, max=25039, avg=8526.39, stdev=1694.85 00:15:22.988 clat percentiles (usec): 00:15:22.988 | 1.00th=[ 5080], 5.00th=[ 6390], 10.00th=[ 7046], 20.00th=[ 7504], 00:15:22.988 | 30.00th=[ 7701], 40.00th=[ 7898], 50.00th=[ 8160], 60.00th=[ 8455], 00:15:22.988 | 70.00th=[ 8848], 80.00th=[ 9372], 90.00th=[10290], 95.00th=[11600], 00:15:22.988 | 99.00th=[13960], 99.50th=[15664], 99.90th=[21627], 99.95th=[23200], 00:15:22.988 | 99.99th=[23725] 00:15:22.988 bw ( KiB/s): min= 8408, max=29920, per=50.45%, avg=20988.36, stdev=5664.43, samples=11 00:15:22.988 iops : min= 2102, max= 7480, avg=5247.09, stdev=1416.11, samples=11 00:15:22.988 write: IOPS=5871, BW=22.9MiB/s (24.1MB/s)(126MiB/5492msec); 0 zone resets 00:15:22.988 slat (usec): min=13, max=3110, avg=65.83, stdev=156.04 00:15:22.988 clat (usec): min=808, max=24867, avg=7219.36, stdev=1410.22 00:15:22.988 lat (usec): min=882, max=24902, avg=7285.19, stdev=1415.88 00:15:22.988 clat percentiles (usec): 00:15:22.988 | 1.00th=[ 3949], 5.00th=[ 5080], 10.00th=[ 5997], 20.00th=[ 6456], 00:15:22.988 | 30.00th=[ 6718], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7373], 00:15:22.988 | 70.00th=[ 7570], 80.00th=[ 7832], 90.00th=[ 8455], 95.00th=[ 9241], 00:15:22.988 | 99.00th=[12125], 99.50th=[14484], 99.90th=[18220], 99.95th=[18482], 00:15:22.988 | 99.99th=[21627] 00:15:22.988 bw ( KiB/s): min= 8944, max=29048, per=89.65%, avg=21057.45, stdev=5401.18, samples=11 00:15:22.988 iops : min= 2236, max= 7262, avg=5264.36, stdev=1350.29, samples=11 00:15:22.988 lat (usec) : 1000=0.01% 00:15:22.988 lat (msec) : 2=0.03%, 4=0.51%, 10=90.60%, 20=8.75%, 50=0.10% 00:15:22.988 cpu : usr=5.73%, sys=24.26%, ctx=6227, majf=0, minf=96 00:15:22.988 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:15:22.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:22.988 issued rwts: total=62450,32248,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:22.988 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:22.988 00:15:22.988 Run status group 0 (all jobs): 00:15:22.988 READ: bw=40.6MiB/s (42.6MB/s), 40.6MiB/s-40.6MiB/s (42.6MB/s-42.6MB/s), io=244MiB (256MB), run=6005-6005msec 00:15:22.988 WRITE: bw=22.9MiB/s (24.1MB/s), 22.9MiB/s-22.9MiB/s (24.1MB/s-24.1MB/s), io=126MiB (132MB), run=5492-5492msec 00:15:22.988 00:15:22.988 Disk stats (read/write): 00:15:22.988 nvme0n1: ios=61554/31619, merge=0/0, 
ticks=486904/211780, in_queue=698684, util=98.63% 00:15:22.988 14:33:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:23.246 14:33:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:23.504 14:33:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:15:23.504 14:33:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:23.504 14:33:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:23.504 14:33:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:23.504 14:33:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:23.504 14:33:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:23.504 14:33:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:15:23.504 14:33:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:23.504 14:33:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:23.504 14:33:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:23.504 14:33:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:23.504 14:33:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:15:23.504 14:33:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:15:24.876 14:33:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:24.876 14:33:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:24.876 14:33:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:24.876 14:33:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:15:24.876 14:33:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=92361 00:15:24.876 14:33:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:15:24.876 14:33:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:24.876 [global] 00:15:24.876 thread=1 00:15:24.876 invalidate=1 00:15:24.876 rw=randrw 00:15:24.876 time_based=1 00:15:24.876 runtime=6 00:15:24.876 ioengine=libaio 00:15:24.876 direct=1 00:15:24.876 bs=4096 00:15:24.876 iodepth=128 00:15:24.876 norandommap=0 00:15:24.876 numjobs=1 00:15:24.876 00:15:24.876 verify_dump=1 00:15:24.876 verify_backlog=512 00:15:24.876 verify_state_save=0 00:15:24.876 do_verify=1 00:15:24.876 verify=crc32c-intel 00:15:24.876 [job0] 00:15:24.876 filename=/dev/nvme0n1 00:15:24.876 Could not set queue depth (nvme0n1) 00:15:24.876 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:24.876 fio-3.35 00:15:24.876 Starting 1 thread 00:15:25.810 14:33:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:25.810 14:33:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:26.068 14:33:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:15:26.068 14:33:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:26.068 14:33:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:26.068 14:33:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:26.068 14:33:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:26.068 14:33:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:26.068 14:33:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:15:26.068 14:33:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:26.068 14:33:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:26.068 14:33:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:26.068 14:33:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:26.068 14:33:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:26.068 14:33:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:15:27.444 14:33:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:27.444 14:33:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:27.445 14:33:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:27.445 14:33:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:27.445 14:33:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:27.703 14:33:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:15:27.703 14:33:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:27.703 14:33:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:27.703 14:33:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:27.703 14:33:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:27.703 14:33:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:27.703 14:33:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:15:27.703 14:33:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:27.703 14:33:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:27.703 14:33:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:27.703 14:33:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:27.703 14:33:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:27.703 14:33:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:15:29.076 14:33:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:29.076 14:33:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:29.076 14:33:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:29.076 14:33:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 92361 00:15:30.974 00:15:30.974 job0: (groupid=0, jobs=1): err= 0: pid=92382: Wed Jul 10 14:33:43 2024 00:15:30.974 read: IOPS=10.9k, BW=42.8MiB/s (44.8MB/s)(257MiB/6003msec) 00:15:30.974 slat (usec): min=3, max=9183, avg=46.33, stdev=239.55 00:15:30.974 clat (usec): min=181, max=27883, avg=8016.95, stdev=2945.21 00:15:30.974 lat (usec): min=207, max=27933, avg=8063.28, stdev=2961.61 00:15:30.974 clat percentiles (usec): 00:15:30.974 | 1.00th=[ 865], 5.00th=[ 2212], 10.00th=[ 4293], 20.00th=[ 6194], 00:15:30.974 | 30.00th=[ 7308], 40.00th=[ 7635], 50.00th=[ 8029], 60.00th=[ 8455], 00:15:30.974 | 70.00th=[ 8979], 80.00th=[ 9634], 90.00th=[11338], 95.00th=[12911], 00:15:30.974 | 99.00th=[16057], 99.50th=[17695], 99.90th=[22152], 99.95th=[23462], 00:15:30.974 | 99.99th=[24249] 00:15:30.974 bw ( KiB/s): min=14904, max=32952, per=53.16%, avg=23274.91, stdev=6721.82, samples=11 00:15:30.975 iops : min= 3726, max= 8238, avg=5818.73, stdev=1680.46, samples=11 00:15:30.975 write: IOPS=6381, BW=24.9MiB/s (26.1MB/s)(138MiB/5533msec); 0 zone resets 00:15:30.975 slat (usec): min=13, max=2545, avg=59.29, stdev=139.55 00:15:30.975 clat (usec): min=124, max=22331, avg=6662.52, stdev=2524.66 00:15:30.975 lat (usec): min=180, max=22368, avg=6721.81, stdev=2534.90 00:15:30.975 clat percentiles (usec): 00:15:30.975 | 1.00th=[ 693], 5.00th=[ 1614], 10.00th=[ 3359], 20.00th=[ 4817], 00:15:30.975 | 30.00th=[ 6128], 40.00th=[ 6587], 50.00th=[ 6915], 60.00th=[ 7242], 00:15:30.975 | 70.00th=[ 7504], 80.00th=[ 7963], 90.00th=[ 9372], 95.00th=[10683], 00:15:30.975 | 99.00th=[13829], 99.50th=[15008], 99.90th=[19006], 99.95th=[19792], 00:15:30.975 | 99.99th=[21627] 00:15:30.975 bw ( KiB/s): min=14928, max=32632, per=91.21%, avg=23283.64, stdev=6550.30, samples=11 00:15:30.975 iops : min= 3732, max= 8158, avg=5820.91, stdev=1637.58, samples=11 00:15:30.975 lat (usec) : 250=0.02%, 500=0.26%, 750=0.58%, 1000=0.94% 00:15:30.975 lat (msec) : 2=3.17%, 4=5.74%, 10=75.42%, 20=13.73%, 50=0.14% 00:15:30.975 cpu : usr=6.08%, sys=26.89%, ctx=7885, majf=0, minf=84 00:15:30.975 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:15:30.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:30.975 issued rwts: total=65708,35310,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.975 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:30.975 00:15:30.975 Run status group 0 (all jobs): 00:15:30.975 READ: bw=42.8MiB/s (44.8MB/s), 42.8MiB/s-42.8MiB/s (44.8MB/s-44.8MB/s), io=257MiB (269MB), run=6003-6003msec 00:15:30.975 WRITE: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=138MiB (145MB), run=5533-5533msec 00:15:30.975 00:15:30.975 Disk stats (read/write): 00:15:30.975 nvme0n1: ios=64973/34512, merge=0/0, ticks=487475/212232, in_queue=699707, util=98.57% 00:15:30.975 14:33:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:30.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:30.975 14:33:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:30.975 14:33:43 
nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:15:30.975 14:33:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:30.975 14:33:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:30.975 14:33:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:30.975 14:33:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:30.975 14:33:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:15:30.975 14:33:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:31.232 14:33:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:15:31.232 14:33:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:15:31.232 14:33:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:15:31.232 14:33:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:15:31.232 14:33:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:31.232 14:33:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:31.232 14:33:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:31.232 14:33:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:31.232 14:33:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:31.232 14:33:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:31.232 rmmod nvme_tcp 00:15:31.232 rmmod nvme_fabrics 00:15:31.490 rmmod nvme_keyring 00:15:31.490 14:33:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:31.490 14:33:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:31.490 14:33:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:31.490 14:33:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 92062 ']' 00:15:31.490 14:33:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 92062 00:15:31.490 14:33:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 92062 ']' 00:15:31.490 14:33:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 92062 00:15:31.490 14:33:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:15:31.490 14:33:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:31.490 14:33:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92062 00:15:31.490 14:33:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:31.490 14:33:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:31.490 killing process with pid 92062 00:15:31.490 14:33:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92062' 00:15:31.490 14:33:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 92062 00:15:31.490 14:33:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 
-- # wait 92062 00:15:31.490 14:33:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:31.490 14:33:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:31.490 14:33:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:31.490 14:33:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:31.490 14:33:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:31.490 14:33:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.490 14:33:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.490 14:33:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.490 14:33:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:31.490 ************************************ 00:15:31.490 END TEST nvmf_target_multipath 00:15:31.490 ************************************ 00:15:31.490 00:15:31.490 real 0m20.429s 00:15:31.490 user 1m20.574s 00:15:31.490 sys 0m6.718s 00:15:31.490 14:33:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:31.490 14:33:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:31.749 14:33:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:31.749 14:33:43 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:31.749 14:33:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:31.749 14:33:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:31.749 14:33:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:31.749 ************************************ 00:15:31.749 START TEST nvmf_zcopy 00:15:31.749 ************************************ 00:15:31.749 14:33:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:31.749 * Looking for test storage... 
00:15:31.749 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:31.749 14:33:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:31.749 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:31.749 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.749 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.749 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.749 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.749 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.749 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.749 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.749 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.749 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.749 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.749 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:15:31.749 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:15:31.749 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.749 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.749 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:31.749 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:31.749 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:31.749 14:33:43 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.749 14:33:43 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.749 14:33:43 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.749 14:33:43 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.749 14:33:43 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.749 14:33:43 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.749 14:33:43 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:31.750 Cannot find device "nvmf_tgt_br" 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:31.750 Cannot find device "nvmf_tgt_br2" 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:31.750 Cannot find device "nvmf_tgt_br" 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:31.750 Cannot find device "nvmf_tgt_br2" 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:15:31.750 14:33:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:32.008 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:32.008 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:32.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:32.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:15:32.008 00:15:32.008 --- 10.0.0.2 ping statistics --- 00:15:32.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.008 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:32.008 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:32.008 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:15:32.008 00:15:32.008 --- 10.0.0.3 ping statistics --- 00:15:32.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.008 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:32.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:32.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:15:32.008 00:15:32.008 --- 10.0.0.1 ping statistics --- 00:15:32.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.008 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=92654 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 92654 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 92654 ']' 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:32.008 14:33:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:32.267 [2024-07-10 14:33:44.338611] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:15:32.267 [2024-07-10 14:33:44.338719] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.267 [2024-07-10 14:33:44.461478] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:32.267 [2024-07-10 14:33:44.477425] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.267 [2024-07-10 14:33:44.512979] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
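The nvmf_veth_init trace above boils down to a small fixed topology: the target runs inside the nvmf_tgt_ns_spdk network namespace on 10.0.0.2 (with a second in-namespace interface on 10.0.0.3), the initiator stays in the root namespace on 10.0.0.1, and the bridge-side veth peers are joined through nvmf_br, with iptables opened for NVMe/TCP port 4420. A standalone sketch of the same setup, using only the commands shown in the trace (the second target interface, nvmf_tgt_if2/10.0.0.3, is created the same way and is omitted here; all of this needs root):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator leg
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target leg
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # initiator to target, as verified in the trace above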
00:15:32.267 [2024-07-10 14:33:44.513034] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.267 [2024-07-10 14:33:44.513045] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.267 [2024-07-10 14:33:44.513054] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.267 [2024-07-10 14:33:44.513061] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:32.267 [2024-07-10 14:33:44.513083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:33.202 [2024-07-10 14:33:45.357480] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:33.202 [2024-07-10 14:33:45.373576] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:15:33.202 malloc0 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:33.202 { 00:15:33.202 "params": { 00:15:33.202 "name": "Nvme$subsystem", 00:15:33.202 "trtype": "$TEST_TRANSPORT", 00:15:33.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:33.202 "adrfam": "ipv4", 00:15:33.202 "trsvcid": "$NVMF_PORT", 00:15:33.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:33.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:33.202 "hdgst": ${hdgst:-false}, 00:15:33.202 "ddgst": ${ddgst:-false} 00:15:33.202 }, 00:15:33.202 "method": "bdev_nvme_attach_controller" 00:15:33.202 } 00:15:33.202 EOF 00:15:33.202 )") 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:33.202 14:33:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:33.202 "params": { 00:15:33.202 "name": "Nvme1", 00:15:33.202 "trtype": "tcp", 00:15:33.202 "traddr": "10.0.0.2", 00:15:33.202 "adrfam": "ipv4", 00:15:33.202 "trsvcid": "4420", 00:15:33.202 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:33.202 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:33.202 "hdgst": false, 00:15:33.202 "ddgst": false 00:15:33.202 }, 00:15:33.202 "method": "bdev_nvme_attach_controller" 00:15:33.202 }' 00:15:33.202 [2024-07-10 14:33:45.473070] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:15:33.202 [2024-07-10 14:33:45.473174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92711 ] 00:15:33.460 [2024-07-10 14:33:45.596311] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:33.460 [2024-07-10 14:33:45.613865] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.460 [2024-07-10 14:33:45.653253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.718 Running I/O for 10 seconds... 
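Stripped of the xtrace bookkeeping, the provisioning for this zcopy run is short: one TCP transport created with zero-copy enabled, one subsystem with a listener on the in-namespace address, and one malloc bdev exposed as namespace 1, after which bdevperf drives it from the initiator side. Condensed from the commands traced above (rpc_cmd is the suite's wrapper around scripts/rpc.py, issued against the nvmf_tgt started inside the namespace; /dev/fd/62 is the process-substituted output of gen_nvmf_target_json):

  rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # up to 10 namespaces
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_malloc_create 32 4096 -b malloc0            # 32 MB malloc bdev, 4096-byte blocks
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192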
00:15:43.689 00:15:43.689 Latency(us) 00:15:43.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:43.689 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:43.689 Verification LBA range: start 0x0 length 0x1000 00:15:43.689 Nvme1n1 : 10.02 5674.67 44.33 0.00 0.00 22481.95 1742.66 41704.73 00:15:43.689 =================================================================================================================== 00:15:43.689 Total : 5674.67 44.33 0.00 0.00 22481.95 1742.66 41704.73 00:15:43.689 14:33:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=92822 00:15:43.689 14:33:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:15:43.689 14:33:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:43.689 14:33:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:43.689 14:33:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:43.689 14:33:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:43.689 14:33:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:43.689 14:33:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:43.689 14:33:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:43.689 { 00:15:43.689 "params": { 00:15:43.689 "name": "Nvme$subsystem", 00:15:43.689 "trtype": "$TEST_TRANSPORT", 00:15:43.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:43.689 "adrfam": "ipv4", 00:15:43.689 "trsvcid": "$NVMF_PORT", 00:15:43.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:43.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:43.689 "hdgst": ${hdgst:-false}, 00:15:43.689 "ddgst": ${ddgst:-false} 00:15:43.689 }, 00:15:43.689 "method": "bdev_nvme_attach_controller" 00:15:43.689 } 00:15:43.689 EOF 00:15:43.689 )") 00:15:43.689 14:33:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:43.689 [2024-07-10 14:33:55.958634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.689 [2024-07-10 14:33:55.958680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.689 14:33:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
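Two notes on the output from this point on. First, the JSON fragment printed by gen_nvmf_target_json is the bdev_nvme_attach_controller entry that bdevperf consumes via --json: it connects a bdev named Nvme1 to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 over TCP, with header and data digests disabled. Second, the long run of near-identical blocks that follows is RPC traffic, not bdevperf I/O errors: while the second bdevperf instance (-t 5 -q 128 -w randrw -M 50 -o 8192, file-prefix spdk_pid92822) starts up and runs, the harness keeps issuing nvmf_subsystem_add_ns for NSID 1 on cnode1, and the target rejects each call with "Requested NSID 1 already in use", which surfaces as JSON-RPC error -32602 (Invalid parameters). The shape of the loop producing that traffic is roughly the following (a hypothetical sketch of the pattern, not the script's actual text; perfpid is the background bdevperf PID recorded at zcopy.sh@39):

  # sketch only: the actual zcopy.sh loop and termination condition may differ
  while kill -0 "$perfpid" 2> /dev/null; do
      # expected to fail: namespace 1 is already attached to cnode1
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done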
00:15:43.689 2024/07/10 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.689 14:33:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:43.689 14:33:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:43.689 "params": { 00:15:43.689 "name": "Nvme1", 00:15:43.689 "trtype": "tcp", 00:15:43.689 "traddr": "10.0.0.2", 00:15:43.689 "adrfam": "ipv4", 00:15:43.689 "trsvcid": "4420", 00:15:43.689 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:43.689 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:43.689 "hdgst": false, 00:15:43.689 "ddgst": false 00:15:43.689 }, 00:15:43.689 "method": "bdev_nvme_attach_controller" 00:15:43.689 }' 00:15:43.689 [2024-07-10 14:33:55.970648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.689 [2024-07-10 14:33:55.970695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.689 2024/07/10 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.689 [2024-07-10 14:33:55.978615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.689 [2024-07-10 14:33:55.978651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.948 2024/07/10 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.948 [2024-07-10 14:33:55.990611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.948 [2024-07-10 14:33:55.990643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.948 2024/07/10 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.948 [2024-07-10 14:33:56.002615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.948 [2024-07-10 14:33:56.002648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.948 [2024-07-10 14:33:56.005405] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 
00:15:43.948 [2024-07-10 14:33:56.005493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92822 ] 00:15:43.948 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.948 [2024-07-10 14:33:56.014618] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.948 [2024-07-10 14:33:56.014652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.948 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.948 [2024-07-10 14:33:56.026628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.948 [2024-07-10 14:33:56.026661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.949 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.949 [2024-07-10 14:33:56.038621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.949 [2024-07-10 14:33:56.038652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.949 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.949 [2024-07-10 14:33:56.046627] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.949 [2024-07-10 14:33:56.046658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.949 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.949 [2024-07-10 14:33:56.054620] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.949 [2024-07-10 14:33:56.054653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.949 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.949 [2024-07-10 14:33:56.066626] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.949 [2024-07-10 14:33:56.066656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:15:43.949 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.949 [2024-07-10 14:33:56.074632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.949 [2024-07-10 14:33:56.074661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.949 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.949 [2024-07-10 14:33:56.082619] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.949 [2024-07-10 14:33:56.082650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.949 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.949 [2024-07-10 14:33:56.090622] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.949 [2024-07-10 14:33:56.090652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.949 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.949 [2024-07-10 14:33:56.098640] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.949 [2024-07-10 14:33:56.098671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.949 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.949 [2024-07-10 14:33:56.106634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.949 [2024-07-10 14:33:56.106664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.949 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.949 [2024-07-10 14:33:56.114632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.949 [2024-07-10 14:33:56.114663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.949 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:15:43.949 [2024-07-10 14:33:56.122648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.949 [2024-07-10 14:33:56.122677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.949 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.949 [2024-07-10 14:33:56.127028] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:43.949 [2024-07-10 14:33:56.134649] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.949 [2024-07-10 14:33:56.134678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.949 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.949 [2024-07-10 14:33:56.141728] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.949 [2024-07-10 14:33:56.146677] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.949 [2024-07-10 14:33:56.146712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.949 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.949 [2024-07-10 14:33:56.158690] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.949 [2024-07-10 14:33:56.158731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.949 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.949 [2024-07-10 14:33:56.170692] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.949 [2024-07-10 14:33:56.170732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.949 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.949 [2024-07-10 14:33:56.178678] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.949 [2024-07-10 14:33:56.178718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.949 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:15:43.949 [2024-07-10 14:33:56.185449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.949 [2024-07-10 14:33:56.190666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.949 [2024-07-10 14:33:56.190699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.949 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.949 [2024-07-10 14:33:56.202704] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.949 [2024-07-10 14:33:56.202746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.949 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.949 [2024-07-10 14:33:56.214721] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.949 [2024-07-10 14:33:56.214764] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.949 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.949 [2024-07-10 14:33:56.222695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.949 [2024-07-10 14:33:56.222733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.949 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.950 [2024-07-10 14:33:56.230710] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.950 [2024-07-10 14:33:56.230750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.950 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.210 [2024-07-10 14:33:56.238697] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.210 [2024-07-10 14:33:56.238733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.210 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.210 [2024-07-10 14:33:56.246682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.210 [2024-07-10 14:33:56.246711] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.210 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.210 [2024-07-10 14:33:56.254692] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.210 [2024-07-10 14:33:56.254724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.210 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.210 [2024-07-10 14:33:56.262705] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.210 [2024-07-10 14:33:56.262739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.210 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.210 [2024-07-10 14:33:56.270694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.210 [2024-07-10 14:33:56.270725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.210 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.210 [2024-07-10 14:33:56.278689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.210 [2024-07-10 14:33:56.278725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.210 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.210 [2024-07-10 14:33:56.286701] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.210 [2024-07-10 14:33:56.286735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.210 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.210 [2024-07-10 14:33:56.294708] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.210 [2024-07-10 14:33:56.294743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.210 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.210 [2024-07-10 14:33:56.302707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.210 [2024-07-10 14:33:56.302740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.210 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.210 [2024-07-10 14:33:56.310704] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.210 [2024-07-10 14:33:56.310737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.210 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.210 [2024-07-10 14:33:56.318718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.210 [2024-07-10 14:33:56.318756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.210 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.210 Running I/O for 5 seconds... 00:15:44.210 [2024-07-10 14:33:56.330730] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.210 [2024-07-10 14:33:56.330770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.210 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.210 [2024-07-10 14:33:56.346298] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.210 [2024-07-10 14:33:56.346330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.210 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.210 [2024-07-10 14:33:56.360307] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.210 [2024-07-10 14:33:56.360352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.210 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.210 [2024-07-10 14:33:56.378024] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.210 [2024-07-10 14:33:56.378062] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.210 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.210 [2024-07-10 14:33:56.393155] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.210 [2024-07-10 14:33:56.393196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.210 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.210 [2024-07-10 14:33:56.402992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.210 [2024-07-10 14:33:56.403031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.210 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.210 [2024-07-10 14:33:56.413351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.210 [2024-07-10 14:33:56.413388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.210 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.210 [2024-07-10 14:33:56.424242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.210 [2024-07-10 14:33:56.424294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.210 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.210 [2024-07-10 14:33:56.434897] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.210 [2024-07-10 14:33:56.434937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.210 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.210 [2024-07-10 14:33:56.446674] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.210 [2024-07-10 14:33:56.446714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.210 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.210 [2024-07-10 14:33:56.457947] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.210 [2024-07-10 14:33:56.457985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.211 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.211 [2024-07-10 14:33:56.470781] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.211 [2024-07-10 14:33:56.470821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.211 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.211 [2024-07-10 14:33:56.488185] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.211 [2024-07-10 14:33:56.488231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.211 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.470 [2024-07-10 14:33:56.504488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.470 [2024-07-10 14:33:56.504532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.470 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.470 [2024-07-10 14:33:56.515328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.470 [2024-07-10 14:33:56.515363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.470 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.470 [2024-07-10 14:33:56.526316] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.470 [2024-07-10 14:33:56.526348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.470 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.470 [2024-07-10 14:33:56.542394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.470 [2024-07-10 14:33:56.542436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:15:44.470 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.470 [2024-07-10 14:33:56.558104] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.470 [2024-07-10 14:33:56.558144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.470 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.470 [2024-07-10 14:33:56.568014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.470 [2024-07-10 14:33:56.568058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.470 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.470 [2024-07-10 14:33:56.579724] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.470 [2024-07-10 14:33:56.579762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.470 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.470 [2024-07-10 14:33:56.596208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.470 [2024-07-10 14:33:56.596246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.470 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.470 [2024-07-10 14:33:56.606218] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.470 [2024-07-10 14:33:56.606262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.470 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.470 [2024-07-10 14:33:56.622354] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.470 [2024-07-10 14:33:56.622401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.470 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:15:44.470 [2024-07-10 14:33:56.637634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.470 [2024-07-10 14:33:56.637677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.470 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.470 [2024-07-10 14:33:56.648512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.470 [2024-07-10 14:33:56.648552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.470 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.470 [2024-07-10 14:33:56.659782] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.470 [2024-07-10 14:33:56.659822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.470 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.470 [2024-07-10 14:33:56.672633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.470 [2024-07-10 14:33:56.672674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.470 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.470 [2024-07-10 14:33:56.682468] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.470 [2024-07-10 14:33:56.682505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.470 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.470 [2024-07-10 14:33:56.694301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.470 [2024-07-10 14:33:56.694336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.470 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.470 [2024-07-10 14:33:56.709013] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.470 [2024-07-10 14:33:56.709051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.470 2024/07/10 14:33:56 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.470 [2024-07-10 14:33:56.724852] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.470 [2024-07-10 14:33:56.724892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.470 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.470 [2024-07-10 14:33:56.735219] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.470 [2024-07-10 14:33:56.735255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.470 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.470 [2024-07-10 14:33:56.746358] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.470 [2024-07-10 14:33:56.746395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.470 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.470 [2024-07-10 14:33:56.759417] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.470 [2024-07-10 14:33:56.759455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.729 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.729 [2024-07-10 14:33:56.776596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.729 [2024-07-10 14:33:56.776654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.729 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.729 [2024-07-10 14:33:56.792627] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.729 [2024-07-10 14:33:56.792679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.729 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.729 [2024-07-10 14:33:56.810140] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.729 [2024-07-10 14:33:56.810182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.729 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.729 [2024-07-10 14:33:56.820713] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.729 [2024-07-10 14:33:56.820751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.729 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.729 [2024-07-10 14:33:56.835263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.729 [2024-07-10 14:33:56.835315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.729 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.729 [2024-07-10 14:33:56.851213] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.729 [2024-07-10 14:33:56.851253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.729 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.729 [2024-07-10 14:33:56.868276] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.729 [2024-07-10 14:33:56.868331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.729 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.729 [2024-07-10 14:33:56.878748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.729 [2024-07-10 14:33:56.878784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.729 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.729 [2024-07-10 14:33:56.889713] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.729 [2024-07-10 14:33:56.889751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.729 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.729 [2024-07-10 14:33:56.900741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.729 [2024-07-10 14:33:56.900788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.729 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.729 [2024-07-10 14:33:56.911448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.729 [2024-07-10 14:33:56.911482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.729 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.729 [2024-07-10 14:33:56.922210] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.729 [2024-07-10 14:33:56.922246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.729 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.729 [2024-07-10 14:33:56.936754] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.729 [2024-07-10 14:33:56.936797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.729 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.729 [2024-07-10 14:33:56.953131] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.729 [2024-07-10 14:33:56.953199] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.729 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.730 [2024-07-10 14:33:56.971865] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.730 [2024-07-10 14:33:56.971934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.730 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.730 [2024-07-10 14:33:56.987266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:44.730 [2024-07-10 14:33:56.987336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.730 2024/07/10 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.730 [2024-07-10 14:33:57.006269] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.730 [2024-07-10 14:33:57.006350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.730 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.988 [2024-07-10 14:33:57.020875] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.988 [2024-07-10 14:33:57.020922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.988 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.988 [2024-07-10 14:33:57.036704] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.988 [2024-07-10 14:33:57.036765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.988 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.988 [2024-07-10 14:33:57.048777] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.988 [2024-07-10 14:33:57.048820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.988 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.988 [2024-07-10 14:33:57.067467] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.988 [2024-07-10 14:33:57.067524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.988 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.988 [2024-07-10 14:33:57.082075] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.988 [2024-07-10 14:33:57.082113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.988 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.988 [2024-07-10 14:33:57.098460] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.988 [2024-07-10 14:33:57.098503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.988 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.988 [2024-07-10 14:33:57.108975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.988 [2024-07-10 14:33:57.109013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.988 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.988 [2024-07-10 14:33:57.119847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.988 [2024-07-10 14:33:57.119883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.988 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.988 [2024-07-10 14:33:57.132558] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.988 [2024-07-10 14:33:57.132594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.988 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.988 [2024-07-10 14:33:57.149904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.988 [2024-07-10 14:33:57.149944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.988 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.988 [2024-07-10 14:33:57.165642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.988 [2024-07-10 14:33:57.165681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.988 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.988 [2024-07-10 14:33:57.181431] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:44.988 [2024-07-10 14:33:57.181483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.988 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.988 [2024-07-10 14:33:57.191721] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.988 [2024-07-10 14:33:57.191757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.988 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.988 [2024-07-10 14:33:57.202584] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.988 [2024-07-10 14:33:57.202620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.988 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.988 [2024-07-10 14:33:57.213863] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.988 [2024-07-10 14:33:57.213903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.988 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.988 [2024-07-10 14:33:57.228991] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.988 [2024-07-10 14:33:57.229040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.989 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.989 [2024-07-10 14:33:57.245654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.989 [2024-07-10 14:33:57.245703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.989 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.989 [2024-07-10 14:33:57.256197] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.989 [2024-07-10 14:33:57.256239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.989 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.989 [2024-07-10 14:33:57.267120] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.989 [2024-07-10 14:33:57.267163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.989 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.247 [2024-07-10 14:33:57.279570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.247 [2024-07-10 14:33:57.279609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.247 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.248 [2024-07-10 14:33:57.289856] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.248 [2024-07-10 14:33:57.289893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.248 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.248 [2024-07-10 14:33:57.300561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.248 [2024-07-10 14:33:57.300609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.248 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.248 [2024-07-10 14:33:57.311713] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.248 [2024-07-10 14:33:57.311753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.248 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.248 [2024-07-10 14:33:57.323012] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.248 [2024-07-10 14:33:57.323052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.248 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.248 [2024-07-10 14:33:57.340049] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.248 [2024-07-10 14:33:57.340086] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.248 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.248 [2024-07-10 14:33:57.357024] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.248 [2024-07-10 14:33:57.357060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.248 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.248 [2024-07-10 14:33:57.367392] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.248 [2024-07-10 14:33:57.367428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.248 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.248 [2024-07-10 14:33:57.377896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.248 [2024-07-10 14:33:57.377929] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.248 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.248 [2024-07-10 14:33:57.388439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.248 [2024-07-10 14:33:57.388476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.248 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.248 [2024-07-10 14:33:57.399047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.248 [2024-07-10 14:33:57.399082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.248 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.248 [2024-07-10 14:33:57.413276] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.248 [2024-07-10 14:33:57.413323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.248 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.248 [2024-07-10 14:33:57.423100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.248 [2024-07-10 14:33:57.423135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.248 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.248 [2024-07-10 14:33:57.437631] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.248 [2024-07-10 14:33:57.437669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.248 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.248 [2024-07-10 14:33:57.448161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.248 [2024-07-10 14:33:57.448204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.248 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.248 [2024-07-10 14:33:57.463101] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.248 [2024-07-10 14:33:57.463155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.248 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.248 [2024-07-10 14:33:57.479465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.248 [2024-07-10 14:33:57.479514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.248 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.248 [2024-07-10 14:33:57.489065] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.248 [2024-07-10 14:33:57.489108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.248 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.248 [2024-07-10 14:33:57.500616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.248 [2024-07-10 14:33:57.500664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:15:45.248 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.248 [2024-07-10 14:33:57.517888] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.248 [2024-07-10 14:33:57.517943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.248 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.248 [2024-07-10 14:33:57.532689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.248 [2024-07-10 14:33:57.532744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.248 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.508 [2024-07-10 14:33:57.543079] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.508 [2024-07-10 14:33:57.543123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.508 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.508 [2024-07-10 14:33:57.554057] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.508 [2024-07-10 14:33:57.554096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.508 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.508 [2024-07-10 14:33:57.570762] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.508 [2024-07-10 14:33:57.570811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.508 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.508 [2024-07-10 14:33:57.580333] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.508 [2024-07-10 14:33:57.580373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.508 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:15:45.508 [2024-07-10 14:33:57.596088] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.508 [2024-07-10 14:33:57.596127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.508 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.508 [2024-07-10 14:33:57.613276] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.508 [2024-07-10 14:33:57.613323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.508 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.508 [2024-07-10 14:33:57.628728] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.508 [2024-07-10 14:33:57.628778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.508 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.508 [2024-07-10 14:33:57.639542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.508 [2024-07-10 14:33:57.639579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.508 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.508 [2024-07-10 14:33:57.650141] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.509 [2024-07-10 14:33:57.650176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.509 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.509 [2024-07-10 14:33:57.660953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.509 [2024-07-10 14:33:57.660988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.509 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.509 [2024-07-10 14:33:57.671450] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.509 [2024-07-10 14:33:57.671487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.509 2024/07/10 14:33:57 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.509 [2024-07-10 14:33:57.682382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.509 [2024-07-10 14:33:57.682425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.509 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.509 [2024-07-10 14:33:57.693407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.509 [2024-07-10 14:33:57.693449] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.509 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.509 [2024-07-10 14:33:57.708866] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.509 [2024-07-10 14:33:57.708907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.509 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.509 [2024-07-10 14:33:57.724743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.509 [2024-07-10 14:33:57.724787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.509 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.509 [2024-07-10 14:33:57.735534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.509 [2024-07-10 14:33:57.735574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.509 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.509 [2024-07-10 14:33:57.750376] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.509 [2024-07-10 14:33:57.750428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.509 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.509 [2024-07-10 14:33:57.760366] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.509 [2024-07-10 14:33:57.760403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.509 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.509 [2024-07-10 14:33:57.772223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.509 [2024-07-10 14:33:57.772264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.509 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.509 [2024-07-10 14:33:57.783142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.509 [2024-07-10 14:33:57.783181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.509 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.509 [2024-07-10 14:33:57.795693] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.509 [2024-07-10 14:33:57.795730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.768 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.768 [2024-07-10 14:33:57.805715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.768 [2024-07-10 14:33:57.805750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.768 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.768 [2024-07-10 14:33:57.817437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.769 [2024-07-10 14:33:57.817472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.769 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.769 [2024-07-10 14:33:57.828313] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.769 [2024-07-10 14:33:57.828345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.769 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.769 [2024-07-10 14:33:57.839717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.769 [2024-07-10 14:33:57.839754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.769 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.769 [2024-07-10 14:33:57.852685] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.769 [2024-07-10 14:33:57.852720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.769 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.769 [2024-07-10 14:33:57.862906] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.769 [2024-07-10 14:33:57.862939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.769 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.769 [2024-07-10 14:33:57.873706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.769 [2024-07-10 14:33:57.873742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.769 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.769 [2024-07-10 14:33:57.886550] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.769 [2024-07-10 14:33:57.886590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.769 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.769 [2024-07-10 14:33:57.897200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.769 [2024-07-10 14:33:57.897237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.769 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.769 [2024-07-10 14:33:57.911857] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use
00:15:45.769 [2024-07-10 14:33:57.911910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:45.769 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:15:45.769 [2024-07-10 14:33:57.928721] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:45.769 [2024-07-10 14:33:57.928777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:45.769 2024/07/10 14:33:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-message sequence (spdk_nvmf_subsystem_add_ns_ext: Requested NSID 1 already in use, nvmf_rpc_ns_paused: Unable to add namespace, JSON-RPC error Code=-32602 Msg=Invalid parameters) repeats for every nvmf_subsystem_add_ns attempt from 14:33:57.944 through 14:33:58.627, elapsed 00:15:45.769-00:15:46.549 ...]
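For context, each failed attempt above is an SPDK JSON-RPC nvmf_subsystem_add_ns call asking the target to attach bdev malloc0 as namespace ID 1 on subsystem nqn.2016-06.io.spdk:cnode1; since NSID 1 is already allocated, the target rejects every call with Code=-32602 Msg=Invalid parameters. (The %!s(bool=false) token is just the Go-style client log printing a boolean through a %s format verb, not part of the RPC payload.) The snippet below is a minimal sketch of such a request, not the test's actual client; it assumes the target listens on the conventional default RPC Unix socket at /var/tmp/spdk.sock.

```python
#!/usr/bin/env python3
# Minimal sketch of the JSON-RPC request behind the repeated failures above.
# Assumption: the SPDK target's RPC socket is at the default /var/tmp/spdk.sock
# (adjust if the target was started with a different -r/--rpc-socket path).
import json
import socket

RPC_SOCK = "/var/tmp/spdk.sock"


def rpc_call(method, params):
    """Send one JSON-RPC 2.0 request over the Unix socket and decode the reply."""
    request = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(RPC_SOCK)
        sock.sendall(json.dumps(request).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break  # peer closed before a complete reply arrived
            buf += chunk
            try:
                return json.loads(buf)  # stop once a full JSON object is in
            except json.JSONDecodeError:
                continue  # reply not complete yet, keep reading
    raise RuntimeError("no complete JSON-RPC response received")


# NSID 1 already belongs to this subsystem, so the target rejects the call;
# the error object mirrors the Code=-32602 Msg=Invalid parameters lines above.
resp = rpc_call("nvmf_subsystem_add_ns", {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "namespace": {"bdev_name": "malloc0", "nsid": 1, "no_auto_visible": False},
})
print(resp.get("error", resp.get("result")))
```

Reissuing the call with an unused NSID (or leaving nsid unset so the target can pick a free one) would be expected to succeed instead of producing this error.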
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.549 [2024-07-10 14:33:58.656804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.549 [2024-07-10 14:33:58.656840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.549 2024/07/10 14:33:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.549 [2024-07-10 14:33:58.667169] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.549 [2024-07-10 14:33:58.667205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.549 2024/07/10 14:33:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.549 [2024-07-10 14:33:58.677725] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.549 [2024-07-10 14:33:58.677761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.549 2024/07/10 14:33:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.549 [2024-07-10 14:33:58.692426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.549 [2024-07-10 14:33:58.692464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.549 2024/07/10 14:33:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.549 [2024-07-10 14:33:58.703034] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.549 [2024-07-10 14:33:58.703071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.549 2024/07/10 14:33:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.549 [2024-07-10 14:33:58.713849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.549 [2024-07-10 14:33:58.713885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.549 2024/07/10 14:33:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.549 [2024-07-10 14:33:58.726567] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.549 [2024-07-10 14:33:58.726605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.549 2024/07/10 14:33:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.549 [2024-07-10 14:33:58.736872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.549 [2024-07-10 14:33:58.736908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.549 2024/07/10 14:33:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.549 [2024-07-10 14:33:58.747655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.549 [2024-07-10 14:33:58.747690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.549 2024/07/10 14:33:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.549 [2024-07-10 14:33:58.758299] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.549 [2024-07-10 14:33:58.758336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.549 2024/07/10 14:33:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.549 [2024-07-10 14:33:58.769379] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.549 [2024-07-10 14:33:58.769414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.549 2024/07/10 14:33:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.549 [2024-07-10 14:33:58.780802] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.549 [2024-07-10 14:33:58.780851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.549 2024/07/10 14:33:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.549 [2024-07-10 14:33:58.795687] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.549 [2024-07-10 14:33:58.795739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.549 2024/07/10 14:33:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.549 [2024-07-10 14:33:58.811084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.549 [2024-07-10 14:33:58.811127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.549 2024/07/10 14:33:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.549 [2024-07-10 14:33:58.821124] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.549 [2024-07-10 14:33:58.821165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.549 2024/07/10 14:33:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.549 [2024-07-10 14:33:58.832974] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.549 [2024-07-10 14:33:58.833022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.549 2024/07/10 14:33:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.808 [2024-07-10 14:33:58.844296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.808 [2024-07-10 14:33:58.844340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.808 2024/07/10 14:33:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.808 [2024-07-10 14:33:58.857359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.808 [2024-07-10 14:33:58.857396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.808 2024/07/10 14:33:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.808 [2024-07-10 14:33:58.867849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.808 [2024-07-10 14:33:58.867887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.808 2024/07/10 14:33:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.808 [2024-07-10 14:33:58.882866] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:46.809 [2024-07-10 14:33:58.882908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/07/10 14:33:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-07-10 14:33:58.893069] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.809 [2024-07-10 14:33:58.893108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/07/10 14:33:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-07-10 14:33:58.907174] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.809 [2024-07-10 14:33:58.907214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/07/10 14:33:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-07-10 14:33:58.922509] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.809 [2024-07-10 14:33:58.922551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/07/10 14:33:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-07-10 14:33:58.932354] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.809 [2024-07-10 14:33:58.932389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/07/10 14:33:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-07-10 14:33:58.946810] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.809 [2024-07-10 14:33:58.946850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/07/10 14:33:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-07-10 14:33:58.957133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.809 [2024-07-10 14:33:58.957170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/07/10 14:33:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-07-10 14:33:58.971909] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.809 [2024-07-10 14:33:58.971950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/07/10 14:33:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-07-10 14:33:58.982512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.809 [2024-07-10 14:33:58.982552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/07/10 14:33:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-07-10 14:33:58.993465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.809 [2024-07-10 14:33:58.993500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/07/10 14:33:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-07-10 14:33:59.010844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.809 [2024-07-10 14:33:59.010882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-07-10 14:33:59.020985] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.809 [2024-07-10 14:33:59.021022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-07-10 14:33:59.032183] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.809 [2024-07-10 14:33:59.032219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-07-10 14:33:59.048188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:46.809 [2024-07-10 14:33:59.048225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-07-10 14:33:59.065423] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.809 [2024-07-10 14:33:59.065460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-07-10 14:33:59.080832] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.809 [2024-07-10 14:33:59.080868] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-07-10 14:33:59.091211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.809 [2024-07-10 14:33:59.091247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-07-10 14:33:59.105953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-07-10 14:33:59.105994] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-07-10 14:33:59.118042] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-07-10 14:33:59.118081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-07-10 14:33:59.135741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-07-10 14:33:59.135779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-07-10 14:33:59.150749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-07-10 14:33:59.150786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-07-10 14:33:59.160123] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-07-10 14:33:59.160158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-07-10 14:33:59.173644] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-07-10 14:33:59.173681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-07-10 14:33:59.189022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-07-10 14:33:59.189060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-07-10 14:33:59.199795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-07-10 14:33:59.199834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-07-10 14:33:59.210627] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-07-10 14:33:59.210664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-07-10 14:33:59.223725] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-07-10 14:33:59.223761] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-07-10 14:33:59.239252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-07-10 14:33:59.239303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-07-10 14:33:59.248704] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-07-10 14:33:59.248738] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-07-10 14:33:59.264312] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-07-10 14:33:59.264348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-07-10 14:33:59.274398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-07-10 14:33:59.274445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-07-10 14:33:59.290524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-07-10 14:33:59.290562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-07-10 14:33:59.305956] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-07-10 14:33:59.305992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-07-10 14:33:59.316576] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-07-10 14:33:59.316620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-07-10 14:33:59.327651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-07-10 14:33:59.327689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-07-10 14:33:59.340835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-07-10 14:33:59.340873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.329 [2024-07-10 14:33:59.358543] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.329 [2024-07-10 14:33:59.358585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.329 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.329 [2024-07-10 14:33:59.373818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.329 [2024-07-10 14:33:59.373856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.329 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.329 [2024-07-10 14:33:59.384337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.329 [2024-07-10 14:33:59.384371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.329 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.329 [2024-07-10 14:33:59.398763] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.329 [2024-07-10 14:33:59.398801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:15:47.329 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.329 [2024-07-10 14:33:59.409272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.329 [2024-07-10 14:33:59.409320] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.329 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.330 [2024-07-10 14:33:59.423833] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.330 [2024-07-10 14:33:59.423869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.330 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.330 [2024-07-10 14:33:59.434013] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.330 [2024-07-10 14:33:59.434047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.330 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.330 [2024-07-10 14:33:59.448572] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.330 [2024-07-10 14:33:59.448615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.330 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.330 [2024-07-10 14:33:59.463381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.330 [2024-07-10 14:33:59.463418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.330 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.330 [2024-07-10 14:33:59.480885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.330 [2024-07-10 14:33:59.480922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.330 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:15:47.330 [2024-07-10 14:33:59.495914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.330 [2024-07-10 14:33:59.495953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.330 2024/07/10 14:33:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.886 [2024-07-10 14:34:01.142934] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.886 [2024-07-10 14:34:01.142989]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.886 2024/07/10 14:34:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.886 [2024-07-10 14:34:01.158986] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.886 [2024-07-10 14:34:01.159048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.886 2024/07/10 14:34:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.886 [2024-07-10 14:34:01.175485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.886 [2024-07-10 14:34:01.175532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.145 2024/07/10 14:34:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.145 [2024-07-10 14:34:01.192389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.145 [2024-07-10 14:34:01.192449] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.145 2024/07/10 14:34:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.145 [2024-07-10 14:34:01.209092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.145 [2024-07-10 14:34:01.209142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.145 2024/07/10 14:34:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.145 [2024-07-10 14:34:01.226025] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.145 [2024-07-10 14:34:01.226076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.145 2024/07/10 14:34:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.145 [2024-07-10 14:34:01.243088] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.145 [2024-07-10 14:34:01.243140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.145 2024/07/10 14:34:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.145 [2024-07-10 14:34:01.258385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.145 [2024-07-10 14:34:01.258429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.145 2024/07/10 14:34:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.145 [2024-07-10 14:34:01.268708] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.145 [2024-07-10 14:34:01.268750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.145 2024/07/10 14:34:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.145 [2024-07-10 14:34:01.283706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.145 [2024-07-10 14:34:01.283754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.145 2024/07/10 14:34:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.145 [2024-07-10 14:34:01.299571] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.145 [2024-07-10 14:34:01.299630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.145 2024/07/10 14:34:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.145 [2024-07-10 14:34:01.315125] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.145 [2024-07-10 14:34:01.315174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.145 2024/07/10 14:34:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.145 [2024-07-10 14:34:01.326020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.145 [2024-07-10 14:34:01.326067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.145 2024/07/10 14:34:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.145 [2024-07-10 14:34:01.336788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.145 [2024-07-10 14:34:01.336828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:15:49.145 00:15:49.145 Latency(us) 00:15:49.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:49.145 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:49.145 Nvme1n1 : 5.01 11471.90 89.62 0.00 0.00 11143.78 4527.94 25261.15 00:15:49.145 =================================================================================================================== 00:15:49.145 Total : 11471.90 89.62 0.00 0.00 11143.78 4527.94 25261.15 00:15:49.145 2024/07/10 14:34:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.145 [2024-07-10 14:34:01.344764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.145 [2024-07-10 14:34:01.344800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.145 2024/07/10 14:34:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.145 [2024-07-10 14:34:01.352782] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.145 [2024-07-10 14:34:01.352825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.145 2024/07/10 14:34:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.145 [2024-07-10 14:34:01.360801] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.145 [2024-07-10 14:34:01.360852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.145 2024/07/10 14:34:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.145 [2024-07-10 14:34:01.372838] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.145 [2024-07-10 14:34:01.372888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.145 2024/07/10 14:34:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.145 [2024-07-10 14:34:01.384830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.145 [2024-07-10 14:34:01.384885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.145 2024/07/10 14:34:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.145 [2024-07-10 
14:34:01.396849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.145 [2024-07-10 14:34:01.396903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.145 2024/07/10 14:34:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.145 [2024-07-10 14:34:01.408826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.145 [2024-07-10 14:34:01.408882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.145 2024/07/10 14:34:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.145 [2024-07-10 14:34:01.416804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.145 [2024-07-10 14:34:01.416845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.145 2024/07/10 14:34:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.145 [2024-07-10 14:34:01.428851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.145 [2024-07-10 14:34:01.428904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.145 2024/07/10 14:34:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.404 [2024-07-10 14:34:01.436820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.404 [2024-07-10 14:34:01.436866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.404 2024/07/10 14:34:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.404 [2024-07-10 14:34:01.444814] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.404 [2024-07-10 14:34:01.444858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.404 2024/07/10 14:34:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.404 [2024-07-10 14:34:01.456835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.404 [2024-07-10 14:34:01.456882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.404 2024/07/10 14:34:01 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.404 [2024-07-10 14:34:01.468883] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.404 [2024-07-10 14:34:01.468944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.404 2024/07/10 14:34:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.404 [2024-07-10 14:34:01.480835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.404 [2024-07-10 14:34:01.480887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.404 2024/07/10 14:34:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.404 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (92822) - No such process 00:15:49.404 14:34:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 92822 00:15:49.404 14:34:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:49.404 14:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.404 14:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:49.404 14:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.404 14:34:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:49.404 14:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.404 14:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:49.404 delay0 00:15:49.404 14:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.404 14:34:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:49.404 14:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.404 14:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:49.404 14:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.404 14:34:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:49.404 [2024-07-10 14:34:01.674998] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:55.967 Initializing NVMe Controllers 00:15:55.967 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:55.967 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:55.967 Initialization complete. Launching workers. 
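For readers following the trace, the zcopy abort phase above boils down to a handful of RPCs plus the bundled abort example. The sketch below is illustrative only and is not part of the captured output: it assumes scripts/rpc.py talking to the default /var/tmp/spdk.sock (the harness goes through its rpc_cmd wrapper instead), while the NQN, bdev names, and option values are copied from the commands visible in the log.

  # Duplicate-NSID rejection exercised by the earlier loop: the second call is
  # refused with JSON-RPC Code=-32602 ("Requested NSID 1 already in use").
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # Swap the namespace for a delay bdev (1,000,000 us added to reads and writes)
  # so that queued I/O stays outstanding long enough for aborts to land.
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

  # Drive 50/50 random read/write I/O at queue depth 64 for 5 seconds over TCP and
  # submit aborts against it (same arguments as the run whose results follow).
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'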
00:15:55.967 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 52 00:15:55.967 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 339, failed to submit 33 00:15:55.967 success 130, unsuccess 209, failed 0 00:15:55.967 14:34:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:55.967 14:34:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:15:55.968 14:34:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:55.968 14:34:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:15:55.968 14:34:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:55.968 14:34:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:15:55.968 14:34:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:55.968 14:34:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:55.968 rmmod nvme_tcp 00:15:55.968 rmmod nvme_fabrics 00:15:55.968 rmmod nvme_keyring 00:15:55.968 14:34:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:55.968 14:34:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:15:55.968 14:34:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:15:55.968 14:34:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 92654 ']' 00:15:55.968 14:34:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 92654 00:15:55.968 14:34:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 92654 ']' 00:15:55.968 14:34:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 92654 00:15:55.968 14:34:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:15:55.968 14:34:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:55.968 14:34:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92654 00:15:55.968 14:34:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:55.968 14:34:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:55.968 killing process with pid 92654 00:15:55.968 14:34:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92654' 00:15:55.968 14:34:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 92654 00:15:55.968 14:34:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 92654 00:15:55.968 14:34:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:55.968 14:34:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:55.968 14:34:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:55.968 14:34:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:55.968 14:34:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:55.968 14:34:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.968 14:34:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:55.968 14:34:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.968 14:34:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:55.968 00:15:55.968 real 0m24.214s 00:15:55.968 user 0m39.359s 00:15:55.968 sys 0m6.354s 00:15:55.968 14:34:08 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:15:55.968 ************************************ 00:15:55.968 END TEST nvmf_zcopy 00:15:55.968 ************************************ 00:15:55.968 14:34:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:55.968 14:34:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:55.968 14:34:08 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:55.968 14:34:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:55.968 14:34:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:55.968 14:34:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:55.968 ************************************ 00:15:55.968 START TEST nvmf_nmic 00:15:55.968 ************************************ 00:15:55.968 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:55.968 * Looking for test storage... 00:15:55.968 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:55.968 14:34:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:55.968 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:15:55.968 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:55.968 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:55.968 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:55.968 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:55.968 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:55.968 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:55.968 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:55.968 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:55.968 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:55.968 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:55.968 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:15:55.968 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:15:55.968 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:55.968 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:55.968 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:55.968 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:55.968 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:55.968 14:34:08 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:55.968 14:34:08 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.968 14:34:08 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.968 14:34:08 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.968 14:34:08 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.968 14:34:08 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.968 14:34:08 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:55.969 Cannot find device "nvmf_tgt_br" 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:55.969 Cannot find device "nvmf_tgt_br2" 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:55.969 Cannot find device "nvmf_tgt_br" 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:55.969 Cannot find device "nvmf_tgt_br2" 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:15:55.969 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type 
bridge 00:15:56.227 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:56.227 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:56.227 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.227 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:15:56.227 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:56.227 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.227 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:15:56.227 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:56.227 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:56.227 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:56.228 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:56.228 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:56.228 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:56.228 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:56.228 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:56.228 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:56.228 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:56.228 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:56.228 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:56.228 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:56.228 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:56.228 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:56.228 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:56.228 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:56.228 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:56.228 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:56.228 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:56.228 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:56.228 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:56.228 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:56.486 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:56.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:56.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:15:56.486 00:15:56.486 --- 10.0.0.2 ping statistics --- 00:15:56.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.486 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:15:56.486 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:56.486 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:56.486 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:15:56.486 00:15:56.486 --- 10.0.0.3 ping statistics --- 00:15:56.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.486 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:15:56.486 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:56.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:56.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:15:56.486 00:15:56.486 --- 10.0.0.1 ping statistics --- 00:15:56.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.486 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:15:56.486 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.486 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:15:56.486 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:56.486 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.486 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:56.486 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:56.486 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.486 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:56.486 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:56.486 14:34:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:56.486 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:56.486 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:56.486 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:56.486 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=93135 00:15:56.486 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:56.486 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 93135 00:15:56.486 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 93135 ']' 00:15:56.486 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.486 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:56.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.486 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.486 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:56.486 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:56.486 [2024-07-10 14:34:08.624401] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 
00:15:56.486 [2024-07-10 14:34:08.624501] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.486 [2024-07-10 14:34:08.750353] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:56.486 [2024-07-10 14:34:08.762726] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:56.745 [2024-07-10 14:34:08.799621] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.745 [2024-07-10 14:34:08.799671] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:56.745 [2024-07-10 14:34:08.799682] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:56.745 [2024-07-10 14:34:08.799690] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:56.745 [2024-07-10 14:34:08.799697] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:56.745 [2024-07-10 14:34:08.799805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.745 [2024-07-10 14:34:08.799879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:56.745 [2024-07-10 14:34:08.800321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:56.745 [2024-07-10 14:34:08.800335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:56.745 [2024-07-10 14:34:08.924697] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:56.745 Malloc0 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:56.745 [2024-07-10 14:34:08.981355] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.745 test case1: single bdev can't be used in multiple subsystems 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.745 14:34:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:56.745 14:34:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.745 14:34:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:56.745 14:34:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:56.745 14:34:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.745 14:34:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:56.745 [2024-07-10 14:34:09.005197] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:56.745 [2024-07-10 14:34:09.005233] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:56.745 [2024-07-10 14:34:09.005244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.745 2024/07/10 14:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.745 request: 00:15:56.745 { 00:15:56.745 "method": "nvmf_subsystem_add_ns", 00:15:56.745 "params": { 00:15:56.745 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:56.745 "namespace": { 00:15:56.745 "bdev_name": "Malloc0", 00:15:56.745 "no_auto_visible": false 00:15:56.745 } 00:15:56.745 } 00:15:56.745 } 00:15:56.745 Got JSON-RPC error response 00:15:56.745 
GoRPCClient: error on JSON-RPC call 00:15:56.745 Adding namespace failed - expected result. 00:15:56.745 test case2: host connect to nvmf target in multiple paths 00:15:56.745 14:34:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:56.745 14:34:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:56.745 14:34:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:56.745 14:34:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:56.745 14:34:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:56.745 14:34:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:56.745 14:34:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.745 14:34:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:56.745 [2024-07-10 14:34:09.017354] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:56.745 14:34:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.745 14:34:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:57.003 14:34:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:57.267 14:34:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:57.267 14:34:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:15:57.268 14:34:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:57.268 14:34:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:57.268 14:34:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:15:59.165 14:34:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:59.165 14:34:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:59.165 14:34:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:59.165 14:34:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:59.165 14:34:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:59.165 14:34:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:15:59.165 14:34:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:59.165 [global] 00:15:59.165 thread=1 00:15:59.165 invalidate=1 00:15:59.165 rw=write 00:15:59.165 time_based=1 00:15:59.165 runtime=1 00:15:59.165 ioengine=libaio 00:15:59.165 direct=1 00:15:59.165 bs=4096 00:15:59.165 iodepth=1 00:15:59.165 norandommap=0 00:15:59.165 numjobs=1 00:15:59.165 00:15:59.165 verify_dump=1 00:15:59.165 verify_backlog=512 00:15:59.165 verify_state_save=0 00:15:59.165 do_verify=1 00:15:59.165 verify=crc32c-intel 00:15:59.165 [job0] 00:15:59.165 filename=/dev/nvme0n1 00:15:59.165 Could not set 
queue depth (nvme0n1) 00:15:59.422 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:59.422 fio-3.35 00:15:59.422 Starting 1 thread 00:16:00.791 00:16:00.791 job0: (groupid=0, jobs=1): err= 0: pid=93231: Wed Jul 10 14:34:12 2024 00:16:00.791 read: IOPS=3205, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1001msec) 00:16:00.791 slat (nsec): min=13760, max=40860, avg=16539.15, stdev=3170.21 00:16:00.791 clat (usec): min=130, max=446, avg=149.46, stdev=15.75 00:16:00.791 lat (usec): min=144, max=474, avg=166.00, stdev=16.56 00:16:00.791 clat percentiles (usec): 00:16:00.791 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 141], 00:16:00.791 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 149], 00:16:00.791 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 159], 95.00th=[ 165], 00:16:00.792 | 99.00th=[ 217], 99.50th=[ 229], 99.90th=[ 347], 99.95th=[ 441], 00:16:00.792 | 99.99th=[ 449] 00:16:00.792 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:16:00.792 slat (nsec): min=19781, max=82736, avg=23060.22, stdev=4126.57 00:16:00.792 clat (usec): min=87, max=768, avg=103.98, stdev=14.97 00:16:00.792 lat (usec): min=111, max=789, avg=127.04, stdev=15.90 00:16:00.792 clat percentiles (usec): 00:16:00.792 | 1.00th=[ 93], 5.00th=[ 95], 10.00th=[ 97], 20.00th=[ 98], 00:16:00.792 | 30.00th=[ 100], 40.00th=[ 101], 50.00th=[ 102], 60.00th=[ 104], 00:16:00.792 | 70.00th=[ 105], 80.00th=[ 109], 90.00th=[ 114], 95.00th=[ 118], 00:16:00.792 | 99.00th=[ 127], 99.50th=[ 141], 99.90th=[ 255], 99.95th=[ 314], 00:16:00.792 | 99.99th=[ 766] 00:16:00.792 bw ( KiB/s): min=14920, max=14920, per=100.00%, avg=14920.00, stdev= 0.00, samples=1 00:16:00.792 iops : min= 3730, max= 3730, avg=3730.00, stdev= 0.00, samples=1 00:16:00.792 lat (usec) : 100=17.86%, 250=81.95%, 500=0.18%, 1000=0.01% 00:16:00.792 cpu : usr=2.10%, sys=10.50%, ctx=6806, majf=0, minf=2 00:16:00.792 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:00.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.792 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.792 issued rwts: total=3209,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.792 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:00.792 00:16:00.792 Run status group 0 (all jobs): 00:16:00.792 READ: bw=12.5MiB/s (13.1MB/s), 12.5MiB/s-12.5MiB/s (13.1MB/s-13.1MB/s), io=12.5MiB (13.1MB), run=1001-1001msec 00:16:00.792 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:16:00.792 00:16:00.792 Disk stats (read/write): 00:16:00.792 nvme0n1: ios=3053/3072, merge=0/0, ticks=491/344, in_queue=835, util=91.28% 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:00.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:00.792 rmmod nvme_tcp 00:16:00.792 rmmod nvme_fabrics 00:16:00.792 rmmod nvme_keyring 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 93135 ']' 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 93135 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 93135 ']' 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 93135 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93135 00:16:00.792 killing process with pid 93135 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93135' 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 93135 00:16:00.792 14:34:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 93135 00:16:01.050 14:34:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:01.050 14:34:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:01.050 14:34:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:01.050 14:34:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:01.050 14:34:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:01.050 14:34:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.050 14:34:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.050 14:34:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.050 14:34:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:01.050 ************************************ 00:16:01.050 END TEST nvmf_nmic 00:16:01.050 ************************************ 00:16:01.050 00:16:01.050 real 0m5.087s 00:16:01.050 user 0m16.622s 00:16:01.050 sys 0m1.281s 00:16:01.050 14:34:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:16:01.050 14:34:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:01.050 14:34:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:01.050 14:34:13 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:01.050 14:34:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:01.050 14:34:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:01.050 14:34:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:01.050 ************************************ 00:16:01.050 START TEST nvmf_fio_target 00:16:01.050 ************************************ 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:01.050 * Looking for test storage... 00:16:01.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:01.050 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:01.051 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:01.308 Cannot find device "nvmf_tgt_br" 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:01.308 Cannot find device "nvmf_tgt_br2" 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:16:01.308 Cannot find device "nvmf_tgt_br" 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:01.308 Cannot find device "nvmf_tgt_br2" 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:01.308 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:01.308 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:01.308 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:16:01.565 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:01.565 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:01.565 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:01.565 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:01.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:01.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:16:01.565 00:16:01.565 --- 10.0.0.2 ping statistics --- 00:16:01.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.565 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:01.565 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:01.565 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:01.565 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:16:01.565 00:16:01.565 --- 10.0.0.3 ping statistics --- 00:16:01.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.565 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:16:01.565 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:01.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:01.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:16:01.565 00:16:01.565 --- 10.0.0.1 ping statistics --- 00:16:01.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.565 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:16:01.565 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:01.565 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:16:01.565 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:01.565 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:01.565 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:01.565 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:01.565 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:01.565 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:01.565 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:01.565 14:34:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:01.565 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:01.565 14:34:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:01.565 14:34:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.565 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=93410 00:16:01.565 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 93410 00:16:01.565 14:34:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:01.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
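[editor's note] The nvmf_veth_init trace above is hard to follow in xtrace form; condensed, the virtual test topology it builds is roughly the following. This is a sketch reconstructed only from the commands visible in the trace (same namespace, interface, and 10.0.0.x names); cleanup of pre-existing interfaces and error handling are omitted.

    # one veth pair for the initiator, two for the target; target ends moved into a netns
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # initiator at 10.0.0.1, target listeners at 10.0.0.2 and 10.0.0.3 inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the host-side peers so initiator and target can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # allow NVMe/TCP traffic (port 4420) in and forwarding across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow in the log are just a sanity check that 10.0.0.2 and 10.0.0.3 are reachable from the host and 10.0.0.1 from inside the namespace before nvmf_tgt is started.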
00:16:01.565 14:34:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 93410 ']' 00:16:01.565 14:34:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.565 14:34:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:01.565 14:34:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.565 14:34:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:01.565 14:34:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.565 [2024-07-10 14:34:13.723406] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:16:01.565 [2024-07-10 14:34:13.723493] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.565 [2024-07-10 14:34:13.844343] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:01.822 [2024-07-10 14:34:13.859877] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:01.822 [2024-07-10 14:34:13.903910] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:01.822 [2024-07-10 14:34:13.904214] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:01.822 [2024-07-10 14:34:13.904476] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:01.822 [2024-07-10 14:34:13.904714] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:01.822 [2024-07-10 14:34:13.904994] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
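[editor's note] The startup notices above point out that tracepoints were enabled (-e 0xFFFF) for this nvmf_tgt instance. A minimal sketch of the two capture options the notices themselves describe, assuming the same shared-memory id (-i 0); the destination path in the second command is arbitrary:

    # live snapshot of the nvmf tracepoint group for app instance -i 0, as suggested by the notice
    spdk_trace -s nvmf -i 0
    # or keep the shared-memory trace file for offline analysis after the run
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0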
00:16:01.822 [2024-07-10 14:34:13.905272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.822 [2024-07-10 14:34:13.905423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:01.822 [2024-07-10 14:34:13.905996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:01.822 [2024-07-10 14:34:13.906019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.386 14:34:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:02.386 14:34:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:16:02.386 14:34:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:02.386 14:34:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:02.386 14:34:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.644 14:34:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:02.644 14:34:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:02.644 [2024-07-10 14:34:14.912607] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:02.911 14:34:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:03.185 14:34:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:03.185 14:34:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:03.443 14:34:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:03.443 14:34:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:03.701 14:34:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:03.701 14:34:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:03.959 14:34:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:03.959 14:34:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:04.217 14:34:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:04.475 14:34:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:04.475 14:34:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:04.734 14:34:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:04.734 14:34:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:04.992 14:34:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:04.992 14:34:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:05.250 14:34:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:16:05.509 14:34:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:05.509 14:34:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:05.767 14:34:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:05.767 14:34:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:06.025 14:34:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:06.281 [2024-07-10 14:34:18.495666] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:06.281 14:34:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:06.539 14:34:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:06.796 14:34:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:07.056 14:34:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:07.056 14:34:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:16:07.056 14:34:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:07.056 14:34:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:16:07.056 14:34:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:16:07.056 14:34:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:16:08.958 14:34:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:08.958 14:34:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:08.958 14:34:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:08.958 14:34:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:16:08.958 14:34:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:08.958 14:34:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:16:08.958 14:34:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:08.958 [global] 00:16:08.958 thread=1 00:16:08.958 invalidate=1 00:16:08.958 rw=write 00:16:08.958 time_based=1 00:16:08.958 runtime=1 00:16:08.958 ioengine=libaio 00:16:08.958 direct=1 00:16:08.958 bs=4096 00:16:08.958 iodepth=1 00:16:08.958 norandommap=0 00:16:08.958 numjobs=1 00:16:08.958 00:16:08.958 verify_dump=1 00:16:08.958 verify_backlog=512 00:16:08.958 verify_state_save=0 00:16:08.958 do_verify=1 00:16:08.958 verify=crc32c-intel 00:16:08.958 [job0] 00:16:08.958 filename=/dev/nvme0n1 00:16:08.958 [job1] 00:16:08.958 filename=/dev/nvme0n2 00:16:08.958 [job2] 
00:16:08.958 filename=/dev/nvme0n3 00:16:08.958 [job3] 00:16:08.958 filename=/dev/nvme0n4 00:16:09.222 Could not set queue depth (nvme0n1) 00:16:09.222 Could not set queue depth (nvme0n2) 00:16:09.222 Could not set queue depth (nvme0n3) 00:16:09.222 Could not set queue depth (nvme0n4) 00:16:09.222 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:09.222 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:09.223 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:09.223 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:09.223 fio-3.35 00:16:09.223 Starting 4 threads 00:16:10.605 00:16:10.605 job0: (groupid=0, jobs=1): err= 0: pid=93706: Wed Jul 10 14:34:22 2024 00:16:10.605 read: IOPS=2250, BW=9003KiB/s (9219kB/s)(9012KiB/1001msec) 00:16:10.605 slat (nsec): min=14215, max=56235, avg=18849.55, stdev=4208.11 00:16:10.605 clat (usec): min=139, max=825, avg=201.03, stdev=49.03 00:16:10.605 lat (usec): min=158, max=842, avg=219.88, stdev=50.04 00:16:10.605 clat percentiles (usec): 00:16:10.605 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:16:10.605 | 30.00th=[ 169], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 192], 00:16:10.605 | 70.00th=[ 212], 80.00th=[ 249], 90.00th=[ 273], 95.00th=[ 289], 00:16:10.605 | 99.00th=[ 318], 99.50th=[ 404], 99.90th=[ 453], 99.95th=[ 482], 00:16:10.605 | 99.99th=[ 824] 00:16:10.605 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:10.605 slat (usec): min=20, max=120, avg=29.74, stdev=10.02 00:16:10.605 clat (usec): min=98, max=2544, avg=163.28, stdev=78.36 00:16:10.605 lat (usec): min=123, max=2571, avg=193.02, stdev=82.97 00:16:10.605 clat percentiles (usec): 00:16:10.605 | 1.00th=[ 105], 5.00th=[ 111], 10.00th=[ 115], 20.00th=[ 122], 00:16:10.605 | 30.00th=[ 127], 40.00th=[ 135], 50.00th=[ 141], 60.00th=[ 151], 00:16:10.605 | 70.00th=[ 161], 80.00th=[ 188], 90.00th=[ 262], 95.00th=[ 293], 00:16:10.605 | 99.00th=[ 375], 99.50th=[ 437], 99.90th=[ 758], 99.95th=[ 922], 00:16:10.605 | 99.99th=[ 2540] 00:16:10.605 bw ( KiB/s): min= 9152, max= 9152, per=24.82%, avg=9152.00, stdev= 0.00, samples=1 00:16:10.605 iops : min= 2288, max= 2288, avg=2288.00, stdev= 0.00, samples=1 00:16:10.605 lat (usec) : 100=0.06%, 250=84.50%, 500=15.31%, 750=0.04%, 1000=0.06% 00:16:10.605 lat (msec) : 4=0.02% 00:16:10.605 cpu : usr=2.40%, sys=8.60%, ctx=4813, majf=0, minf=7 00:16:10.605 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.605 issued rwts: total=2253,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.605 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.605 job1: (groupid=0, jobs=1): err= 0: pid=93707: Wed Jul 10 14:34:22 2024 00:16:10.605 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:16:10.605 slat (nsec): min=9359, max=64086, avg=20314.09, stdev=8010.93 00:16:10.605 clat (usec): min=173, max=7384, avg=325.79, stdev=233.35 00:16:10.605 lat (usec): min=187, max=7415, avg=346.10, stdev=233.56 00:16:10.605 clat percentiles (usec): 00:16:10.605 | 1.00th=[ 231], 5.00th=[ 243], 10.00th=[ 253], 20.00th=[ 265], 00:16:10.605 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 297], 
60.00th=[ 314], 00:16:10.605 | 70.00th=[ 334], 80.00th=[ 379], 90.00th=[ 412], 95.00th=[ 429], 00:16:10.605 | 99.00th=[ 465], 99.50th=[ 537], 99.90th=[ 3949], 99.95th=[ 7373], 00:16:10.605 | 99.99th=[ 7373] 00:16:10.605 write: IOPS=1859, BW=7437KiB/s (7615kB/s)(7444KiB/1001msec); 0 zone resets 00:16:10.605 slat (usec): min=12, max=106, avg=27.76, stdev=11.68 00:16:10.605 clat (usec): min=107, max=480, avg=219.70, stdev=61.04 00:16:10.605 lat (usec): min=134, max=523, avg=247.46, stdev=65.06 00:16:10.605 clat percentiles (usec): 00:16:10.605 | 1.00th=[ 120], 5.00th=[ 130], 10.00th=[ 139], 20.00th=[ 157], 00:16:10.605 | 30.00th=[ 184], 40.00th=[ 200], 50.00th=[ 219], 60.00th=[ 237], 00:16:10.605 | 70.00th=[ 258], 80.00th=[ 273], 90.00th=[ 297], 95.00th=[ 310], 00:16:10.605 | 99.00th=[ 388], 99.50th=[ 412], 99.90th=[ 474], 99.95th=[ 482], 00:16:10.605 | 99.99th=[ 482] 00:16:10.605 bw ( KiB/s): min= 8192, max= 8192, per=22.21%, avg=8192.00, stdev= 0.00, samples=1 00:16:10.605 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:10.605 lat (usec) : 250=40.09%, 500=59.64%, 750=0.09%, 1000=0.03% 00:16:10.605 lat (msec) : 2=0.03%, 4=0.09%, 10=0.03% 00:16:10.605 cpu : usr=1.10%, sys=7.10%, ctx=3397, majf=0, minf=11 00:16:10.605 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.605 issued rwts: total=1536,1861,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.605 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.605 job2: (groupid=0, jobs=1): err= 0: pid=93708: Wed Jul 10 14:34:22 2024 00:16:10.605 read: IOPS=1923, BW=7692KiB/s (7877kB/s)(7700KiB/1001msec) 00:16:10.605 slat (nsec): min=12423, max=41027, avg=16154.04, stdev=2600.00 00:16:10.605 clat (usec): min=163, max=481, avg=280.96, stdev=72.10 00:16:10.605 lat (usec): min=178, max=499, avg=297.11, stdev=72.65 00:16:10.605 clat percentiles (usec): 00:16:10.605 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 198], 00:16:10.605 | 30.00th=[ 225], 40.00th=[ 269], 50.00th=[ 285], 60.00th=[ 302], 00:16:10.605 | 70.00th=[ 322], 80.00th=[ 351], 90.00th=[ 379], 95.00th=[ 396], 00:16:10.605 | 99.00th=[ 429], 99.50th=[ 441], 99.90th=[ 465], 99.95th=[ 482], 00:16:10.605 | 99.99th=[ 482] 00:16:10.605 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:10.605 slat (nsec): min=12617, max=95339, avg=26054.38, stdev=7939.59 00:16:10.605 clat (usec): min=114, max=2366, avg=179.21, stdev=71.61 00:16:10.605 lat (usec): min=136, max=2405, avg=205.26, stdev=71.01 00:16:10.605 clat percentiles (usec): 00:16:10.605 | 1.00th=[ 121], 5.00th=[ 127], 10.00th=[ 131], 20.00th=[ 137], 00:16:10.605 | 30.00th=[ 143], 40.00th=[ 149], 50.00th=[ 157], 60.00th=[ 169], 00:16:10.605 | 70.00th=[ 192], 80.00th=[ 221], 90.00th=[ 265], 95.00th=[ 285], 00:16:10.605 | 99.00th=[ 318], 99.50th=[ 330], 99.90th=[ 375], 99.95th=[ 717], 00:16:10.605 | 99.99th=[ 2376] 00:16:10.605 bw ( KiB/s): min= 9592, max= 9592, per=26.01%, avg=9592.00, stdev= 0.00, samples=1 00:16:10.605 iops : min= 2398, max= 2398, avg=2398.00, stdev= 0.00, samples=1 00:16:10.605 lat (usec) : 250=61.52%, 500=38.43%, 750=0.03% 00:16:10.605 lat (msec) : 4=0.03% 00:16:10.605 cpu : usr=1.80%, sys=6.50%, ctx=3975, majf=0, minf=11 00:16:10.605 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.605 issued rwts: total=1925,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.605 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.605 job3: (groupid=0, jobs=1): err= 0: pid=93709: Wed Jul 10 14:34:22 2024 00:16:10.605 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:16:10.605 slat (nsec): min=14087, max=92244, avg=17785.78, stdev=3722.73 00:16:10.606 clat (usec): min=136, max=399, avg=187.96, stdev=25.75 00:16:10.606 lat (usec): min=166, max=418, avg=205.75, stdev=26.59 00:16:10.606 clat percentiles (usec): 00:16:10.606 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:16:10.606 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:16:10.606 | 70.00th=[ 192], 80.00th=[ 215], 90.00th=[ 229], 95.00th=[ 241], 00:16:10.606 | 99.00th=[ 255], 99.50th=[ 262], 99.90th=[ 277], 99.95th=[ 338], 00:16:10.606 | 99.99th=[ 400] 00:16:10.606 write: IOPS=2757, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1001msec); 0 zone resets 00:16:10.606 slat (usec): min=20, max=124, avg=25.99, stdev= 6.47 00:16:10.606 clat (usec): min=95, max=397, avg=141.65, stdev=20.32 00:16:10.606 lat (usec): min=133, max=420, avg=167.64, stdev=23.36 00:16:10.606 clat percentiles (usec): 00:16:10.606 | 1.00th=[ 116], 5.00th=[ 120], 10.00th=[ 122], 20.00th=[ 126], 00:16:10.606 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 141], 00:16:10.606 | 70.00th=[ 149], 80.00th=[ 159], 90.00th=[ 172], 95.00th=[ 182], 00:16:10.606 | 99.00th=[ 196], 99.50th=[ 200], 99.90th=[ 217], 99.95th=[ 375], 00:16:10.606 | 99.99th=[ 400] 00:16:10.606 bw ( KiB/s): min=11712, max=11712, per=31.76%, avg=11712.00, stdev= 0.00, samples=1 00:16:10.606 iops : min= 2928, max= 2928, avg=2928.00, stdev= 0.00, samples=1 00:16:10.606 lat (usec) : 100=0.02%, 250=99.06%, 500=0.92% 00:16:10.606 cpu : usr=2.00%, sys=9.10%, ctx=5323, majf=0, minf=6 00:16:10.606 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.606 issued rwts: total=2560,2760,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.606 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.606 00:16:10.606 Run status group 0 (all jobs): 00:16:10.606 READ: bw=32.3MiB/s (33.9MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=32.3MiB (33.9MB), run=1001-1001msec 00:16:10.606 WRITE: bw=36.0MiB/s (37.8MB/s), 7437KiB/s-10.8MiB/s (7615kB/s-11.3MB/s), io=36.1MiB (37.8MB), run=1001-1001msec 00:16:10.606 00:16:10.606 Disk stats (read/write): 00:16:10.606 nvme0n1: ios=1943/2048, merge=0/0, ticks=422/382, in_queue=804, util=86.14% 00:16:10.606 nvme0n2: ios=1286/1536, merge=0/0, ticks=441/365, in_queue=806, util=86.09% 00:16:10.606 nvme0n3: ios=1536/1914, merge=0/0, ticks=431/353, in_queue=784, util=88.72% 00:16:10.606 nvme0n4: ios=2048/2385, merge=0/0, ticks=402/365, in_queue=767, util=89.48% 00:16:10.606 14:34:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:10.606 [global] 00:16:10.606 thread=1 00:16:10.606 invalidate=1 00:16:10.606 rw=randwrite 00:16:10.606 time_based=1 00:16:10.606 runtime=1 00:16:10.606 ioengine=libaio 00:16:10.606 direct=1 00:16:10.606 bs=4096 00:16:10.606 iodepth=1 00:16:10.606 norandommap=0 00:16:10.606 
numjobs=1 00:16:10.606 00:16:10.606 verify_dump=1 00:16:10.606 verify_backlog=512 00:16:10.606 verify_state_save=0 00:16:10.606 do_verify=1 00:16:10.606 verify=crc32c-intel 00:16:10.606 [job0] 00:16:10.606 filename=/dev/nvme0n1 00:16:10.606 [job1] 00:16:10.606 filename=/dev/nvme0n2 00:16:10.606 [job2] 00:16:10.606 filename=/dev/nvme0n3 00:16:10.606 [job3] 00:16:10.606 filename=/dev/nvme0n4 00:16:10.606 Could not set queue depth (nvme0n1) 00:16:10.606 Could not set queue depth (nvme0n2) 00:16:10.606 Could not set queue depth (nvme0n3) 00:16:10.606 Could not set queue depth (nvme0n4) 00:16:10.606 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:10.606 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:10.606 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:10.606 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:10.606 fio-3.35 00:16:10.606 Starting 4 threads 00:16:11.978 00:16:11.978 job0: (groupid=0, jobs=1): err= 0: pid=93762: Wed Jul 10 14:34:23 2024 00:16:11.978 read: IOPS=1053, BW=4216KiB/s (4317kB/s)(4220KiB/1001msec) 00:16:11.978 slat (nsec): min=10032, max=64708, avg=17869.69, stdev=5832.45 00:16:11.978 clat (usec): min=150, max=1897, avg=431.18, stdev=97.89 00:16:11.978 lat (usec): min=178, max=1927, avg=449.05, stdev=98.88 00:16:11.978 clat percentiles (usec): 00:16:11.978 | 1.00th=[ 289], 5.00th=[ 330], 10.00th=[ 355], 20.00th=[ 375], 00:16:11.978 | 30.00th=[ 383], 40.00th=[ 392], 50.00th=[ 404], 60.00th=[ 424], 00:16:11.978 | 70.00th=[ 453], 80.00th=[ 478], 90.00th=[ 545], 95.00th=[ 603], 00:16:11.978 | 99.00th=[ 758], 99.50th=[ 816], 99.90th=[ 971], 99.95th=[ 1893], 00:16:11.978 | 99.99th=[ 1893] 00:16:11.978 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:11.978 slat (nsec): min=14149, max=89220, avg=29719.40, stdev=11204.71 00:16:11.979 clat (usec): min=116, max=1138, avg=308.91, stdev=57.83 00:16:11.979 lat (usec): min=154, max=1173, avg=338.63, stdev=57.70 00:16:11.979 clat percentiles (usec): 00:16:11.979 | 1.00th=[ 219], 5.00th=[ 251], 10.00th=[ 260], 20.00th=[ 269], 00:16:11.979 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 302], 00:16:11.979 | 70.00th=[ 326], 80.00th=[ 351], 90.00th=[ 388], 95.00th=[ 416], 00:16:11.979 | 99.00th=[ 482], 99.50th=[ 498], 99.90th=[ 586], 99.95th=[ 1139], 00:16:11.979 | 99.99th=[ 1139] 00:16:11.979 bw ( KiB/s): min= 7368, max= 7368, per=23.99%, avg=7368.00, stdev= 0.00, samples=1 00:16:11.979 iops : min= 1842, max= 1842, avg=1842.00, stdev= 0.00, samples=1 00:16:11.979 lat (usec) : 250=2.82%, 500=90.66%, 750=6.02%, 1000=0.42% 00:16:11.979 lat (msec) : 2=0.08% 00:16:11.979 cpu : usr=1.40%, sys=5.30%, ctx=2591, majf=0, minf=13 00:16:11.979 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:11.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.979 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.979 issued rwts: total=1055,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.979 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:11.979 job1: (groupid=0, jobs=1): err= 0: pid=93763: Wed Jul 10 14:34:23 2024 00:16:11.979 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:16:11.979 slat (nsec): min=14567, max=60901, avg=17343.55, 
stdev=3398.29 00:16:11.979 clat (usec): min=121, max=337, avg=157.15, stdev=14.62 00:16:11.979 lat (usec): min=150, max=370, avg=174.49, stdev=15.29 00:16:11.979 clat percentiles (usec): 00:16:11.979 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 147], 00:16:11.979 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 157], 00:16:11.979 | 70.00th=[ 161], 80.00th=[ 163], 90.00th=[ 172], 95.00th=[ 180], 00:16:11.979 | 99.00th=[ 223], 99.50th=[ 233], 99.90th=[ 247], 99.95th=[ 253], 00:16:11.979 | 99.99th=[ 338] 00:16:11.979 write: IOPS=3142, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1001msec); 0 zone resets 00:16:11.979 slat (usec): min=20, max=1151, avg=26.10, stdev=24.86 00:16:11.979 clat (usec): min=3, max=1762, avg=117.59, stdev=39.12 00:16:11.979 lat (usec): min=115, max=1792, avg=143.69, stdev=45.60 00:16:11.979 clat percentiles (usec): 00:16:11.979 | 1.00th=[ 99], 5.00th=[ 103], 10.00th=[ 104], 20.00th=[ 108], 00:16:11.979 | 30.00th=[ 111], 40.00th=[ 113], 50.00th=[ 115], 60.00th=[ 118], 00:16:11.979 | 70.00th=[ 121], 80.00th=[ 124], 90.00th=[ 129], 95.00th=[ 135], 00:16:11.979 | 99.00th=[ 157], 99.50th=[ 241], 99.90th=[ 498], 99.95th=[ 832], 00:16:11.979 | 99.99th=[ 1762] 00:16:11.979 bw ( KiB/s): min=12856, max=12856, per=41.86%, avg=12856.00, stdev= 0.00, samples=1 00:16:11.979 iops : min= 3214, max= 3214, avg=3214.00, stdev= 0.00, samples=1 00:16:11.979 lat (usec) : 4=0.02%, 50=0.02%, 100=0.72%, 250=98.97%, 500=0.23% 00:16:11.979 lat (usec) : 750=0.02%, 1000=0.02% 00:16:11.979 lat (msec) : 2=0.02% 00:16:11.979 cpu : usr=2.20%, sys=10.40%, ctx=6223, majf=0, minf=13 00:16:11.979 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:11.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.979 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.979 issued rwts: total=3072,3146,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.979 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:11.979 job2: (groupid=0, jobs=1): err= 0: pid=93764: Wed Jul 10 14:34:23 2024 00:16:11.979 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:16:11.979 slat (nsec): min=14248, max=64691, avg=26120.27, stdev=9430.02 00:16:11.979 clat (usec): min=161, max=2517, avg=465.16, stdev=137.54 00:16:11.979 lat (usec): min=176, max=2543, avg=491.28, stdev=142.99 00:16:11.979 clat percentiles (usec): 00:16:11.979 | 1.00th=[ 343], 5.00th=[ 363], 10.00th=[ 371], 20.00th=[ 379], 00:16:11.979 | 30.00th=[ 388], 40.00th=[ 396], 50.00th=[ 416], 60.00th=[ 441], 00:16:11.979 | 70.00th=[ 469], 80.00th=[ 545], 90.00th=[ 644], 95.00th=[ 742], 00:16:11.979 | 99.00th=[ 848], 99.50th=[ 881], 99.90th=[ 1139], 99.95th=[ 2507], 00:16:11.979 | 99.99th=[ 2507] 00:16:11.979 write: IOPS=1465, BW=5862KiB/s (6003kB/s)(5868KiB/1001msec); 0 zone resets 00:16:11.979 slat (usec): min=23, max=156, avg=43.16, stdev=11.93 00:16:11.979 clat (usec): min=118, max=927, avg=290.28, stdev=59.78 00:16:11.979 lat (usec): min=152, max=973, avg=333.44, stdev=60.06 00:16:11.979 clat percentiles (usec): 00:16:11.979 | 1.00th=[ 153], 5.00th=[ 217], 10.00th=[ 239], 20.00th=[ 253], 00:16:11.979 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 289], 00:16:11.979 | 70.00th=[ 302], 80.00th=[ 326], 90.00th=[ 371], 95.00th=[ 396], 00:16:11.979 | 99.00th=[ 453], 99.50th=[ 469], 99.90th=[ 816], 99.95th=[ 930], 00:16:11.979 | 99.99th=[ 930] 00:16:11.979 bw ( KiB/s): min= 7344, max= 7344, per=23.91%, avg=7344.00, stdev= 0.00, samples=1 00:16:11.979 iops : 
min= 1836, max= 1836, avg=1836.00, stdev= 0.00, samples=1 00:16:11.979 lat (usec) : 250=10.84%, 500=78.72%, 750=8.55%, 1000=1.77% 00:16:11.979 lat (msec) : 2=0.08%, 4=0.04% 00:16:11.979 cpu : usr=1.60%, sys=7.10%, ctx=2498, majf=0, minf=9 00:16:11.979 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:11.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.979 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.979 issued rwts: total=1024,1467,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.979 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:11.979 job3: (groupid=0, jobs=1): err= 0: pid=93765: Wed Jul 10 14:34:23 2024 00:16:11.979 read: IOPS=1052, BW=4212KiB/s (4313kB/s)(4216KiB/1001msec) 00:16:11.979 slat (nsec): min=12424, max=61470, avg=18513.71, stdev=5928.28 00:16:11.979 clat (usec): min=248, max=1943, avg=430.68, stdev=95.98 00:16:11.979 lat (usec): min=271, max=1960, avg=449.19, stdev=97.28 00:16:11.979 clat percentiles (usec): 00:16:11.979 | 1.00th=[ 297], 5.00th=[ 334], 10.00th=[ 359], 20.00th=[ 375], 00:16:11.979 | 30.00th=[ 388], 40.00th=[ 392], 50.00th=[ 404], 60.00th=[ 424], 00:16:11.979 | 70.00th=[ 453], 80.00th=[ 474], 90.00th=[ 537], 95.00th=[ 603], 00:16:11.979 | 99.00th=[ 766], 99.50th=[ 824], 99.90th=[ 979], 99.95th=[ 1942], 00:16:11.979 | 99.99th=[ 1942] 00:16:11.979 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:11.979 slat (nsec): min=15954, max=98513, avg=32185.01, stdev=10945.69 00:16:11.979 clat (usec): min=127, max=1313, avg=306.36, stdev=64.07 00:16:11.979 lat (usec): min=174, max=1361, avg=338.55, stdev=65.19 00:16:11.979 clat percentiles (usec): 00:16:11.979 | 1.00th=[ 163], 5.00th=[ 243], 10.00th=[ 258], 20.00th=[ 269], 00:16:11.979 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 302], 00:16:11.979 | 70.00th=[ 314], 80.00th=[ 343], 90.00th=[ 392], 95.00th=[ 424], 00:16:11.979 | 99.00th=[ 486], 99.50th=[ 529], 99.90th=[ 611], 99.95th=[ 1319], 00:16:11.979 | 99.99th=[ 1319] 00:16:11.979 bw ( KiB/s): min= 7392, max= 7392, per=24.07%, avg=7392.00, stdev= 0.00, samples=1 00:16:11.979 iops : min= 1848, max= 1848, avg=1848.00, stdev= 0.00, samples=1 00:16:11.979 lat (usec) : 250=3.98%, 500=89.85%, 750=5.71%, 1000=0.39% 00:16:11.979 lat (msec) : 2=0.08% 00:16:11.979 cpu : usr=2.10%, sys=5.10%, ctx=2590, majf=0, minf=10 00:16:11.979 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:11.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.979 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.979 issued rwts: total=1054,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.979 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:11.979 00:16:11.979 Run status group 0 (all jobs): 00:16:11.979 READ: bw=24.2MiB/s (25.4MB/s), 4092KiB/s-12.0MiB/s (4190kB/s-12.6MB/s), io=24.2MiB (25.4MB), run=1001-1001msec 00:16:11.979 WRITE: bw=30.0MiB/s (31.4MB/s), 5862KiB/s-12.3MiB/s (6003kB/s-12.9MB/s), io=30.0MiB (31.5MB), run=1001-1001msec 00:16:11.979 00:16:11.979 Disk stats (read/write): 00:16:11.979 nvme0n1: ios=1074/1234, merge=0/0, ticks=451/363, in_queue=814, util=88.98% 00:16:11.979 nvme0n2: ios=2609/2902, merge=0/0, ticks=434/370, in_queue=804, util=89.61% 00:16:11.979 nvme0n3: ios=1059/1106, merge=0/0, ticks=512/335, in_queue=847, util=90.38% 00:16:11.979 nvme0n4: ios=1024/1236, merge=0/0, ticks=424/359, in_queue=783, util=89.81% 
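[editor's note] For reference, the fio-wrapper invocation above amounts to a job file built from the parameters echoed in the log. Run by hand it would look roughly like this; the sketch is assembled only from those printed parameters (the wrapper may set options not shown), and the /dev/nvme0nX names assume the same namespace ordering as in this run:

    cat > nvmf_randwrite.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=randwrite
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    [job1]
    filename=/dev/nvme0n2
    [job2]
    filename=/dev/nvme0n3
    [job3]
    filename=/dev/nvme0n4
    EOF
    fio nvmf_randwrite.fio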
00:16:11.979 14:34:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:11.979 [global] 00:16:11.979 thread=1 00:16:11.979 invalidate=1 00:16:11.979 rw=write 00:16:11.979 time_based=1 00:16:11.979 runtime=1 00:16:11.979 ioengine=libaio 00:16:11.979 direct=1 00:16:11.979 bs=4096 00:16:11.979 iodepth=128 00:16:11.979 norandommap=0 00:16:11.979 numjobs=1 00:16:11.979 00:16:11.979 verify_dump=1 00:16:11.979 verify_backlog=512 00:16:11.979 verify_state_save=0 00:16:11.979 do_verify=1 00:16:11.979 verify=crc32c-intel 00:16:11.979 [job0] 00:16:11.979 filename=/dev/nvme0n1 00:16:11.979 [job1] 00:16:11.979 filename=/dev/nvme0n2 00:16:11.979 [job2] 00:16:11.979 filename=/dev/nvme0n3 00:16:11.979 [job3] 00:16:11.979 filename=/dev/nvme0n4 00:16:11.979 Could not set queue depth (nvme0n1) 00:16:11.979 Could not set queue depth (nvme0n2) 00:16:11.979 Could not set queue depth (nvme0n3) 00:16:11.979 Could not set queue depth (nvme0n4) 00:16:11.979 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:11.979 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:11.979 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:11.979 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:11.979 fio-3.35 00:16:11.979 Starting 4 threads 00:16:13.354 00:16:13.354 job0: (groupid=0, jobs=1): err= 0: pid=93825: Wed Jul 10 14:34:25 2024 00:16:13.354 read: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec) 00:16:13.354 slat (usec): min=4, max=4949, avg=93.15, stdev=409.07 00:16:13.354 clat (usec): min=8756, max=29608, avg=12397.19, stdev=3416.19 00:16:13.354 lat (usec): min=8991, max=29614, avg=12490.34, stdev=3422.19 00:16:13.354 clat percentiles (usec): 00:16:13.354 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[10945], 00:16:13.354 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:16:13.354 | 70.00th=[11863], 80.00th=[12125], 90.00th=[13566], 95.00th=[22676], 00:16:13.354 | 99.00th=[26346], 99.50th=[28181], 99.90th=[29492], 99.95th=[29492], 00:16:13.354 | 99.99th=[29492] 00:16:13.354 write: IOPS=5457, BW=21.3MiB/s (22.4MB/s)(21.4MiB/1006msec); 0 zone resets 00:16:13.354 slat (usec): min=5, max=2553, avg=88.02, stdev=306.47 00:16:13.354 clat (usec): min=3930, max=25743, avg=11596.69, stdev=2657.64 00:16:13.354 lat (usec): min=6079, max=25776, avg=11684.71, stdev=2667.91 00:16:13.354 clat percentiles (usec): 00:16:13.354 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 00:16:13.354 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11207], 60.00th=[11469], 00:16:13.354 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12518], 95.00th=[18482], 00:16:13.354 | 99.00th=[22414], 99.50th=[22676], 99.90th=[25560], 99.95th=[25822], 00:16:13.354 | 99.99th=[25822] 00:16:13.354 bw ( KiB/s): min=18832, max=24064, per=32.38%, avg=21448.00, stdev=3699.58, samples=2 00:16:13.354 iops : min= 4708, max= 6016, avg=5362.00, stdev=924.90, samples=2 00:16:13.354 lat (msec) : 4=0.01%, 10=13.72%, 20=81.23%, 50=5.04% 00:16:13.354 cpu : usr=4.88%, sys=14.53%, ctx=799, majf=0, minf=9 00:16:13.354 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:13.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.354 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:13.354 issued rwts: total=5120,5490,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.355 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:13.355 job1: (groupid=0, jobs=1): err= 0: pid=93826: Wed Jul 10 14:34:25 2024 00:16:13.355 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:16:13.355 slat (usec): min=3, max=5457, avg=187.49, stdev=642.41 00:16:13.355 clat (usec): min=17690, max=29109, avg=23475.81, stdev=2316.22 00:16:13.355 lat (usec): min=17720, max=29926, avg=23663.30, stdev=2291.15 00:16:13.355 clat percentiles (usec): 00:16:13.355 | 1.00th=[19268], 5.00th=[20055], 10.00th=[20579], 20.00th=[21627], 00:16:13.355 | 30.00th=[22152], 40.00th=[22676], 50.00th=[23200], 60.00th=[23725], 00:16:13.355 | 70.00th=[24249], 80.00th=[25297], 90.00th=[26870], 95.00th=[28181], 00:16:13.355 | 99.00th=[28705], 99.50th=[28967], 99.90th=[29230], 99.95th=[29230], 00:16:13.355 | 99.99th=[29230] 00:16:13.355 write: IOPS=2891, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1004msec); 0 zone resets 00:16:13.355 slat (usec): min=9, max=6042, avg=171.34, stdev=563.50 00:16:13.355 clat (usec): min=2875, max=28812, avg=22847.04, stdev=2827.45 00:16:13.355 lat (usec): min=3900, max=29028, avg=23018.38, stdev=2788.17 00:16:13.355 clat percentiles (usec): 00:16:13.355 | 1.00th=[ 6980], 5.00th=[19792], 10.00th=[21365], 20.00th=[22152], 00:16:13.355 | 30.00th=[22414], 40.00th=[22676], 50.00th=[23200], 60.00th=[23462], 00:16:13.355 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24773], 95.00th=[26346], 00:16:13.355 | 99.00th=[28181], 99.50th=[28443], 99.90th=[28705], 99.95th=[28705], 00:16:13.355 | 99.99th=[28705] 00:16:13.355 bw ( KiB/s): min= 9920, max=12288, per=16.77%, avg=11104.00, stdev=1674.43, samples=2 00:16:13.355 iops : min= 2480, max= 3072, avg=2776.00, stdev=418.61, samples=2 00:16:13.355 lat (msec) : 4=0.15%, 10=0.59%, 20=4.28%, 50=94.98% 00:16:13.355 cpu : usr=3.09%, sys=8.37%, ctx=1068, majf=0, minf=15 00:16:13.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:13.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:13.355 issued rwts: total=2560,2903,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.355 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:13.355 job2: (groupid=0, jobs=1): err= 0: pid=93827: Wed Jul 10 14:34:25 2024 00:16:13.355 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:16:13.355 slat (usec): min=5, max=5430, avg=96.31, stdev=467.08 00:16:13.355 clat (usec): min=7729, max=17877, avg=12637.72, stdev=1462.22 00:16:13.355 lat (usec): min=7752, max=17902, avg=12734.03, stdev=1498.42 00:16:13.355 clat percentiles (usec): 00:16:13.355 | 1.00th=[ 8455], 5.00th=[ 9896], 10.00th=[11207], 20.00th=[11863], 00:16:13.355 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:16:13.355 | 70.00th=[12911], 80.00th=[13435], 90.00th=[14615], 95.00th=[15139], 00:16:13.355 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17695], 99.95th=[17695], 00:16:13.355 | 99.99th=[17957] 00:16:13.355 write: IOPS=5181, BW=20.2MiB/s (21.2MB/s)(20.3MiB/1002msec); 0 zone resets 00:16:13.355 slat (usec): min=11, max=4770, avg=89.44, stdev=419.03 00:16:13.355 clat (usec): min=1414, max=17562, avg=11937.72, stdev=1458.27 00:16:13.355 lat (usec): min=1437, max=17975, avg=12027.16, stdev=1496.19 00:16:13.355 clat percentiles (usec): 00:16:13.355 | 1.00th=[ 7242], 
5.00th=[ 9372], 10.00th=[10814], 20.00th=[11469], 00:16:13.355 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:16:13.355 | 70.00th=[12387], 80.00th=[12518], 90.00th=[13042], 95.00th=[13960], 00:16:13.355 | 99.00th=[16581], 99.50th=[17171], 99.90th=[17433], 99.95th=[17433], 00:16:13.355 | 99.99th=[17433] 00:16:13.355 bw ( KiB/s): min=20480, max=20521, per=30.95%, avg=20500.50, stdev=28.99, samples=2 00:16:13.355 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:16:13.355 lat (msec) : 2=0.13%, 10=5.81%, 20=94.07% 00:16:13.355 cpu : usr=5.00%, sys=15.08%, ctx=584, majf=0, minf=9 00:16:13.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:13.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:13.355 issued rwts: total=5120,5192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.355 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:13.355 job3: (groupid=0, jobs=1): err= 0: pid=93828: Wed Jul 10 14:34:25 2024 00:16:13.355 read: IOPS=2938, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1002msec) 00:16:13.355 slat (usec): min=3, max=6324, avg=166.37, stdev=610.03 00:16:13.355 clat (usec): min=414, max=28942, avg=20395.43, stdev=5733.56 00:16:13.355 lat (usec): min=2706, max=29131, avg=20561.81, stdev=5755.71 00:16:13.355 clat percentiles (usec): 00:16:13.355 | 1.00th=[ 3425], 5.00th=[11076], 10.00th=[12649], 20.00th=[13042], 00:16:13.355 | 30.00th=[19530], 40.00th=[21627], 50.00th=[22676], 60.00th=[23200], 00:16:13.355 | 70.00th=[23725], 80.00th=[24511], 90.00th=[26346], 95.00th=[28181], 00:16:13.355 | 99.00th=[28705], 99.50th=[28705], 99.90th=[28967], 99.95th=[28967], 00:16:13.355 | 99.99th=[28967] 00:16:13.355 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:16:13.355 slat (usec): min=6, max=5972, avg=158.22, stdev=540.90 00:16:13.355 clat (usec): min=9701, max=28603, avg=21469.74, stdev=4226.49 00:16:13.355 lat (usec): min=9729, max=28632, avg=21627.96, stdev=4225.54 00:16:13.355 clat percentiles (usec): 00:16:13.355 | 1.00th=[11469], 5.00th=[12256], 10.00th=[12780], 20.00th=[20055], 00:16:13.355 | 30.00th=[21890], 40.00th=[22414], 50.00th=[22938], 60.00th=[23462], 00:16:13.355 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[26608], 00:16:13.355 | 99.00th=[27657], 99.50th=[28181], 99.90th=[28181], 99.95th=[28443], 00:16:13.355 | 99.99th=[28705] 00:16:13.355 bw ( KiB/s): min=12288, max=12312, per=18.57%, avg=12300.00, stdev=16.97, samples=2 00:16:13.355 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:16:13.355 lat (usec) : 500=0.02% 00:16:13.355 lat (msec) : 4=0.53%, 10=1.28%, 20=24.73%, 50=73.44% 00:16:13.355 cpu : usr=2.80%, sys=9.49%, ctx=990, majf=0, minf=17 00:16:13.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:16:13.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:13.355 issued rwts: total=2944,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.355 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:13.355 00:16:13.355 Run status group 0 (all jobs): 00:16:13.355 READ: bw=61.1MiB/s (64.1MB/s), 9.96MiB/s-20.0MiB/s (10.4MB/s-20.9MB/s), io=61.5MiB (64.5MB), run=1002-1006msec 00:16:13.355 WRITE: bw=64.7MiB/s (67.8MB/s), 11.3MiB/s-21.3MiB/s (11.8MB/s-22.4MB/s), io=65.1MiB (68.2MB), 
run=1002-1006msec 00:16:13.355 00:16:13.355 Disk stats (read/write): 00:16:13.355 nvme0n1: ios=4754/5120, merge=0/0, ticks=12610/12315, in_queue=24925, util=89.37% 00:16:13.355 nvme0n2: ios=2151/2560, merge=0/0, ticks=12125/13533, in_queue=25658, util=89.47% 00:16:13.355 nvme0n3: ios=4307/4608, merge=0/0, ticks=25525/23582, in_queue=49107, util=89.20% 00:16:13.355 nvme0n4: ios=2112/2560, merge=0/0, ticks=12307/13285, in_queue=25592, util=89.75% 00:16:13.355 14:34:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:13.355 [global] 00:16:13.355 thread=1 00:16:13.355 invalidate=1 00:16:13.355 rw=randwrite 00:16:13.355 time_based=1 00:16:13.355 runtime=1 00:16:13.355 ioengine=libaio 00:16:13.355 direct=1 00:16:13.355 bs=4096 00:16:13.355 iodepth=128 00:16:13.355 norandommap=0 00:16:13.355 numjobs=1 00:16:13.355 00:16:13.355 verify_dump=1 00:16:13.355 verify_backlog=512 00:16:13.355 verify_state_save=0 00:16:13.355 do_verify=1 00:16:13.355 verify=crc32c-intel 00:16:13.355 [job0] 00:16:13.355 filename=/dev/nvme0n1 00:16:13.355 [job1] 00:16:13.355 filename=/dev/nvme0n2 00:16:13.355 [job2] 00:16:13.355 filename=/dev/nvme0n3 00:16:13.356 [job3] 00:16:13.356 filename=/dev/nvme0n4 00:16:13.356 Could not set queue depth (nvme0n1) 00:16:13.356 Could not set queue depth (nvme0n2) 00:16:13.356 Could not set queue depth (nvme0n3) 00:16:13.356 Could not set queue depth (nvme0n4) 00:16:13.356 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:13.356 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:13.356 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:13.356 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:13.356 fio-3.35 00:16:13.356 Starting 4 threads 00:16:14.775 00:16:14.775 job0: (groupid=0, jobs=1): err= 0: pid=93885: Wed Jul 10 14:34:26 2024 00:16:14.775 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:16:14.775 slat (usec): min=3, max=8759, avg=182.79, stdev=844.72 00:16:14.775 clat (usec): min=15845, max=36795, avg=22748.13, stdev=3411.32 00:16:14.775 lat (usec): min=15860, max=36808, avg=22930.93, stdev=3488.76 00:16:14.775 clat percentiles (usec): 00:16:14.775 | 1.00th=[16450], 5.00th=[17695], 10.00th=[18482], 20.00th=[20317], 00:16:14.775 | 30.00th=[21365], 40.00th=[21890], 50.00th=[22414], 60.00th=[22938], 00:16:14.775 | 70.00th=[23200], 80.00th=[23987], 90.00th=[27657], 95.00th=[29492], 00:16:14.775 | 99.00th=[33162], 99.50th=[35914], 99.90th=[36963], 99.95th=[36963], 00:16:14.775 | 99.99th=[36963] 00:16:14.775 write: IOPS=2934, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1006msec); 0 zone resets 00:16:14.775 slat (usec): min=5, max=9920, avg=173.92, stdev=728.74 00:16:14.775 clat (usec): min=1357, max=34517, avg=23252.40, stdev=3725.30 00:16:14.775 lat (usec): min=6799, max=34630, avg=23426.32, stdev=3768.38 00:16:14.775 clat percentiles (usec): 00:16:14.775 | 1.00th=[ 9241], 5.00th=[17433], 10.00th=[19006], 20.00th=[21103], 00:16:14.775 | 30.00th=[22414], 40.00th=[23200], 50.00th=[23725], 60.00th=[24249], 00:16:14.775 | 70.00th=[24773], 80.00th=[25297], 90.00th=[26084], 95.00th=[27919], 00:16:14.775 | 99.00th=[33424], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:16:14.775 | 99.99th=[34341] 00:16:14.775 bw ( KiB/s): 
min=10312, max=12304, per=17.55%, avg=11308.00, stdev=1408.56, samples=2 00:16:14.775 iops : min= 2578, max= 3076, avg=2827.00, stdev=352.14, samples=2 00:16:14.775 lat (msec) : 2=0.02%, 10=1.05%, 20=14.80%, 50=84.13% 00:16:14.775 cpu : usr=3.28%, sys=7.06%, ctx=945, majf=0, minf=17 00:16:14.775 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:14.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:14.775 issued rwts: total=2560,2952,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.775 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:14.775 job1: (groupid=0, jobs=1): err= 0: pid=93887: Wed Jul 10 14:34:26 2024 00:16:14.775 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:16:14.775 slat (usec): min=2, max=9039, avg=185.47, stdev=831.37 00:16:14.775 clat (usec): min=14811, max=33929, avg=22927.51, stdev=3238.93 00:16:14.775 lat (usec): min=14893, max=35150, avg=23112.98, stdev=3318.77 00:16:14.775 clat percentiles (usec): 00:16:14.775 | 1.00th=[15664], 5.00th=[17957], 10.00th=[19006], 20.00th=[20841], 00:16:14.775 | 30.00th=[21627], 40.00th=[22152], 50.00th=[22676], 60.00th=[22938], 00:16:14.775 | 70.00th=[23462], 80.00th=[25035], 90.00th=[28181], 95.00th=[28967], 00:16:14.775 | 99.00th=[31851], 99.50th=[32375], 99.90th=[33162], 99.95th=[33424], 00:16:14.775 | 99.99th=[33817] 00:16:14.775 write: IOPS=3007, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1007msec); 0 zone resets 00:16:14.775 slat (usec): min=5, max=10672, avg=167.67, stdev=691.09 00:16:14.775 clat (usec): min=1327, max=34349, avg=22517.48, stdev=3931.80 00:16:14.775 lat (usec): min=6798, max=34381, avg=22685.15, stdev=3977.58 00:16:14.775 clat percentiles (usec): 00:16:14.775 | 1.00th=[ 7439], 5.00th=[16450], 10.00th=[17695], 20.00th=[19530], 00:16:14.775 | 30.00th=[21103], 40.00th=[22676], 50.00th=[23462], 60.00th=[24249], 00:16:14.775 | 70.00th=[24773], 80.00th=[25035], 90.00th=[26084], 95.00th=[26870], 00:16:14.775 | 99.00th=[31065], 99.50th=[31851], 99.90th=[33424], 99.95th=[34341], 00:16:14.775 | 99.99th=[34341] 00:16:14.775 bw ( KiB/s): min=10920, max=12288, per=18.01%, avg=11604.00, stdev=967.32, samples=2 00:16:14.775 iops : min= 2730, max= 3072, avg=2901.00, stdev=241.83, samples=2 00:16:14.775 lat (msec) : 2=0.02%, 10=1.13%, 20=17.50%, 50=81.36% 00:16:14.775 cpu : usr=2.68%, sys=7.46%, ctx=935, majf=0, minf=13 00:16:14.775 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:14.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:14.775 issued rwts: total=2560,3029,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.775 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:14.775 job2: (groupid=0, jobs=1): err= 0: pid=93888: Wed Jul 10 14:34:26 2024 00:16:14.775 read: IOPS=5038, BW=19.7MiB/s (20.6MB/s)(19.8MiB/1004msec) 00:16:14.775 slat (usec): min=5, max=5599, avg=96.77, stdev=473.63 00:16:14.775 clat (usec): min=2878, max=19994, avg=12707.75, stdev=1672.29 00:16:14.775 lat (usec): min=2892, max=20021, avg=12804.52, stdev=1711.31 00:16:14.775 clat percentiles (usec): 00:16:14.775 | 1.00th=[ 7898], 5.00th=[10290], 10.00th=[11207], 20.00th=[11863], 00:16:14.775 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12518], 60.00th=[12649], 00:16:14.775 | 70.00th=[13042], 80.00th=[13829], 90.00th=[14877], 95.00th=[15664], 
00:16:14.775 | 99.00th=[17171], 99.50th=[17433], 99.90th=[17957], 99.95th=[18220], 00:16:14.775 | 99.99th=[20055] 00:16:14.775 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:16:14.775 slat (usec): min=9, max=5147, avg=91.61, stdev=455.34 00:16:14.775 clat (usec): min=7300, max=18586, avg=12232.59, stdev=1239.27 00:16:14.775 lat (usec): min=7325, max=18606, avg=12324.20, stdev=1302.41 00:16:14.775 clat percentiles (usec): 00:16:14.775 | 1.00th=[ 8717], 5.00th=[10421], 10.00th=[11076], 20.00th=[11600], 00:16:14.775 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12125], 60.00th=[12256], 00:16:14.775 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13566], 95.00th=[14091], 00:16:14.775 | 99.00th=[16712], 99.50th=[17171], 99.90th=[18482], 99.95th=[18482], 00:16:14.775 | 99.99th=[18482] 00:16:14.775 bw ( KiB/s): min=20360, max=20600, per=31.78%, avg=20480.00, stdev=169.71, samples=2 00:16:14.775 iops : min= 5090, max= 5150, avg=5120.00, stdev=42.43, samples=2 00:16:14.775 lat (msec) : 4=0.18%, 10=3.35%, 20=96.47% 00:16:14.775 cpu : usr=3.99%, sys=15.15%, ctx=527, majf=0, minf=9 00:16:14.775 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:14.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:14.775 issued rwts: total=5059,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.775 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:14.775 job3: (groupid=0, jobs=1): err= 0: pid=93889: Wed Jul 10 14:34:26 2024 00:16:14.775 read: IOPS=5066, BW=19.8MiB/s (20.8MB/s)(19.9MiB/1005msec) 00:16:14.775 slat (usec): min=4, max=11626, avg=107.07, stdev=680.44 00:16:14.775 clat (usec): min=3284, max=24431, avg=13372.98, stdev=3490.61 00:16:14.775 lat (usec): min=5075, max=24446, avg=13480.04, stdev=3521.46 00:16:14.775 clat percentiles (usec): 00:16:14.775 | 1.00th=[ 5866], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10290], 00:16:14.775 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12387], 60.00th=[12911], 00:16:14.775 | 70.00th=[14484], 80.00th=[15270], 90.00th=[19006], 95.00th=[21103], 00:16:14.775 | 99.00th=[22938], 99.50th=[23200], 99.90th=[23987], 99.95th=[24511], 00:16:14.775 | 99.99th=[24511] 00:16:14.775 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:16:14.775 slat (usec): min=5, max=8623, avg=82.01, stdev=312.02 00:16:14.775 clat (usec): min=4169, max=24349, avg=11569.94, stdev=2423.39 00:16:14.775 lat (usec): min=4212, max=24360, avg=11651.95, stdev=2444.18 00:16:14.775 clat percentiles (usec): 00:16:14.775 | 1.00th=[ 4948], 5.00th=[ 6063], 10.00th=[ 7111], 20.00th=[10028], 00:16:14.775 | 30.00th=[11600], 40.00th=[12387], 50.00th=[12649], 60.00th=[12780], 00:16:14.775 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13304], 95.00th=[13435], 00:16:14.775 | 99.00th=[13698], 99.50th=[13829], 99.90th=[23200], 99.95th=[23725], 00:16:14.775 | 99.99th=[24249] 00:16:14.776 bw ( KiB/s): min=20439, max=20480, per=31.75%, avg=20459.50, stdev=28.99, samples=2 00:16:14.776 iops : min= 5109, max= 5120, avg=5114.50, stdev= 7.78, samples=2 00:16:14.776 lat (msec) : 4=0.01%, 10=15.64%, 20=80.58%, 50=3.77% 00:16:14.776 cpu : usr=3.88%, sys=11.55%, ctx=757, majf=0, minf=13 00:16:14.776 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:14.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.776 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:16:14.776 issued rwts: total=5092,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.776 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:14.776 00:16:14.776 Run status group 0 (all jobs): 00:16:14.776 READ: bw=59.2MiB/s (62.1MB/s), 9.93MiB/s-19.8MiB/s (10.4MB/s-20.8MB/s), io=59.7MiB (62.6MB), run=1004-1007msec 00:16:14.776 WRITE: bw=62.9MiB/s (66.0MB/s), 11.5MiB/s-19.9MiB/s (12.0MB/s-20.9MB/s), io=63.4MiB (66.4MB), run=1004-1007msec 00:16:14.776 00:16:14.776 Disk stats (read/write): 00:16:14.776 nvme0n1: ios=2098/2502, merge=0/0, ticks=22607/27846, in_queue=50453, util=86.96% 00:16:14.776 nvme0n2: ios=2123/2560, merge=0/0, ticks=23247/27577, in_queue=50824, util=87.70% 00:16:14.776 nvme0n3: ios=4096/4608, merge=0/0, ticks=24173/23635, in_queue=47808, util=88.74% 00:16:14.776 nvme0n4: ios=4096/4559, merge=0/0, ticks=51145/51224, in_queue=102369, util=89.58% 00:16:14.776 14:34:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:16:14.776 14:34:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=93903 00:16:14.776 14:34:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:14.776 14:34:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:16:14.776 [global] 00:16:14.776 thread=1 00:16:14.776 invalidate=1 00:16:14.776 rw=read 00:16:14.776 time_based=1 00:16:14.776 runtime=10 00:16:14.776 ioengine=libaio 00:16:14.776 direct=1 00:16:14.776 bs=4096 00:16:14.776 iodepth=1 00:16:14.776 norandommap=1 00:16:14.776 numjobs=1 00:16:14.776 00:16:14.776 [job0] 00:16:14.776 filename=/dev/nvme0n1 00:16:14.776 [job1] 00:16:14.776 filename=/dev/nvme0n2 00:16:14.776 [job2] 00:16:14.776 filename=/dev/nvme0n3 00:16:14.776 [job3] 00:16:14.776 filename=/dev/nvme0n4 00:16:14.776 Could not set queue depth (nvme0n1) 00:16:14.776 Could not set queue depth (nvme0n2) 00:16:14.776 Could not set queue depth (nvme0n3) 00:16:14.776 Could not set queue depth (nvme0n4) 00:16:14.776 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:14.776 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:14.776 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:14.776 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:14.776 fio-3.35 00:16:14.776 Starting 4 threads 00:16:18.078 14:34:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:18.078 fio: pid=93946, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:18.078 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=63746048, buflen=4096 00:16:18.078 14:34:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:18.078 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=49065984, buflen=4096 00:16:18.078 fio: pid=93945, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:18.078 14:34:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:18.078 14:34:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:18.336 fio: io_u error on file /dev/nvme0n1: Remote I/O 
error: read offset=65191936, buflen=4096 00:16:18.336 fio: pid=93943, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:18.336 14:34:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:18.337 14:34:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:18.595 fio: pid=93944, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:18.595 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=53817344, buflen=4096 00:16:18.595 14:34:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:18.595 14:34:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:18.595 00:16:18.595 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=93943: Wed Jul 10 14:34:30 2024 00:16:18.595 read: IOPS=4614, BW=18.0MiB/s (18.9MB/s)(62.2MiB/3449msec) 00:16:18.595 slat (usec): min=8, max=10485, avg=21.00, stdev=152.93 00:16:18.595 clat (usec): min=125, max=2780, avg=193.82, stdev=60.54 00:16:18.595 lat (usec): min=139, max=10678, avg=214.82, stdev=166.04 00:16:18.595 clat percentiles (usec): 00:16:18.595 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:16:18.595 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 178], 00:16:18.595 | 70.00th=[ 188], 80.00th=[ 227], 90.00th=[ 277], 95.00th=[ 289], 00:16:18.595 | 99.00th=[ 338], 99.50th=[ 404], 99.90th=[ 660], 99.95th=[ 1106], 00:16:18.595 | 99.99th=[ 1876] 00:16:18.595 bw ( KiB/s): min=13064, max=21104, per=31.34%, avg=19174.67, stdev=3078.30, samples=6 00:16:18.595 iops : min= 3266, max= 5276, avg=4793.67, stdev=769.58, samples=6 00:16:18.595 lat (usec) : 250=82.79%, 500=17.02%, 750=0.12%, 1000=0.01% 00:16:18.595 lat (msec) : 2=0.05%, 4=0.01% 00:16:18.595 cpu : usr=2.03%, sys=7.08%, ctx=15950, majf=0, minf=1 00:16:18.595 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:18.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.595 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.595 issued rwts: total=15917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.595 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:18.595 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=93944: Wed Jul 10 14:34:30 2024 00:16:18.595 read: IOPS=3551, BW=13.9MiB/s (14.5MB/s)(51.3MiB/3700msec) 00:16:18.595 slat (usec): min=7, max=11731, avg=20.37, stdev=196.35 00:16:18.595 clat (usec): min=3, max=3013, avg=259.52, stdev=65.23 00:16:18.595 lat (usec): min=142, max=12095, avg=279.89, stdev=206.89 00:16:18.595 clat percentiles (usec): 00:16:18.595 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 149], 20.00th=[ 249], 00:16:18.595 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:16:18.595 | 70.00th=[ 281], 80.00th=[ 285], 90.00th=[ 293], 95.00th=[ 306], 00:16:18.595 | 99.00th=[ 379], 99.50th=[ 412], 99.90th=[ 510], 99.95th=[ 1057], 00:16:18.595 | 99.99th=[ 2900] 00:16:18.595 bw ( KiB/s): min=13112, max=16169, per=22.77%, avg=13930.43, stdev=1010.39, samples=7 00:16:18.595 iops : min= 3278, max= 4042, avg=3482.57, stdev=252.50, samples=7 00:16:18.595 lat (usec) : 4=0.01%, 10=0.01%, 250=20.08%, 500=79.77%, 
750=0.07% 00:16:18.595 lat (usec) : 1000=0.01% 00:16:18.595 lat (msec) : 2=0.04%, 4=0.02% 00:16:18.595 cpu : usr=1.27%, sys=4.76%, ctx=13193, majf=0, minf=1 00:16:18.595 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:18.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.595 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.595 issued rwts: total=13140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.595 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:18.595 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=93945: Wed Jul 10 14:34:30 2024 00:16:18.595 read: IOPS=3734, BW=14.6MiB/s (15.3MB/s)(46.8MiB/3208msec) 00:16:18.595 slat (usec): min=7, max=11146, avg=16.24, stdev=118.91 00:16:18.595 clat (usec): min=42, max=1995, avg=249.89, stdev=60.30 00:16:18.595 lat (usec): min=159, max=11414, avg=266.13, stdev=132.42 00:16:18.595 clat percentiles (usec): 00:16:18.595 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 176], 00:16:18.595 | 30.00th=[ 253], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:16:18.595 | 70.00th=[ 281], 80.00th=[ 285], 90.00th=[ 293], 95.00th=[ 302], 00:16:18.595 | 99.00th=[ 359], 99.50th=[ 392], 99.90th=[ 537], 99.95th=[ 938], 00:16:18.595 | 99.99th=[ 1942] 00:16:18.595 bw ( KiB/s): min=13512, max=20832, per=24.71%, avg=15116.00, stdev=2877.57, samples=6 00:16:18.595 iops : min= 3378, max= 5208, avg=3779.00, stdev=719.39, samples=6 00:16:18.595 lat (usec) : 50=0.01%, 250=29.03%, 500=70.83%, 750=0.06%, 1000=0.02% 00:16:18.595 lat (msec) : 2=0.04% 00:16:18.595 cpu : usr=1.25%, sys=4.99%, ctx=12025, majf=0, minf=1 00:16:18.595 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:18.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.595 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.595 issued rwts: total=11980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.595 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:18.595 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=93946: Wed Jul 10 14:34:30 2024 00:16:18.595 read: IOPS=5308, BW=20.7MiB/s (21.7MB/s)(60.8MiB/2932msec) 00:16:18.595 slat (nsec): min=13616, max=74599, avg=16757.58, stdev=3318.31 00:16:18.595 clat (usec): min=138, max=2392, avg=170.03, stdev=35.47 00:16:18.595 lat (usec): min=153, max=2422, avg=186.79, stdev=35.79 00:16:18.595 clat percentiles (usec): 00:16:18.595 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:16:18.595 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 172], 00:16:18.595 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 188], 00:16:18.595 | 99.00th=[ 200], 99.50th=[ 215], 99.90th=[ 322], 99.95th=[ 594], 00:16:18.595 | 99.99th=[ 2024] 00:16:18.595 bw ( KiB/s): min=20864, max=21496, per=34.73%, avg=21251.20, stdev=240.44, samples=5 00:16:18.595 iops : min= 5216, max= 5374, avg=5312.80, stdev=60.11, samples=5 00:16:18.595 lat (usec) : 250=99.79%, 500=0.13%, 750=0.03%, 1000=0.01% 00:16:18.595 lat (msec) : 2=0.02%, 4=0.01% 00:16:18.595 cpu : usr=1.47%, sys=7.37%, ctx=15568, majf=0, minf=1 00:16:18.595 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:18.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.595 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:16:18.595 issued rwts: total=15564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.595 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:18.595 00:16:18.595 Run status group 0 (all jobs): 00:16:18.595 READ: bw=59.8MiB/s (62.7MB/s), 13.9MiB/s-20.7MiB/s (14.5MB/s-21.7MB/s), io=221MiB (232MB), run=2932-3700msec 00:16:18.595 00:16:18.595 Disk stats (read/write): 00:16:18.595 nvme0n1: ios=15605/0, merge=0/0, ticks=3067/0, in_queue=3067, util=95.34% 00:16:18.595 nvme0n2: ios=12650/0, merge=0/0, ticks=3357/0, in_queue=3357, util=95.50% 00:16:18.595 nvme0n3: ios=11700/0, merge=0/0, ticks=2872/0, in_queue=2872, util=96.34% 00:16:18.595 nvme0n4: ios=15239/0, merge=0/0, ticks=2688/0, in_queue=2688, util=96.79% 00:16:18.853 14:34:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:18.853 14:34:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:19.110 14:34:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:19.110 14:34:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:19.368 14:34:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:19.368 14:34:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:19.626 14:34:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:19.626 14:34:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:20.191 14:34:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:16:20.191 14:34:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 93903 00:16:20.191 14:34:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:16:20.191 14:34:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:20.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.191 14:34:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:20.191 14:34:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:16:20.191 14:34:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:20.191 14:34:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:20.191 14:34:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:20.191 14:34:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:20.191 nvmf hotplug test: fio failed as expected 00:16:20.191 14:34:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:16:20.191 14:34:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:20.191 14:34:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:20.191 14:34:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:20.450 14:34:32 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:20.450 14:34:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:20.450 14:34:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:20.450 14:34:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:20.450 14:34:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:16:20.450 14:34:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:20.450 14:34:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:16:20.450 14:34:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:20.450 14:34:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:16:20.450 14:34:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:20.450 14:34:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:20.450 rmmod nvme_tcp 00:16:20.450 rmmod nvme_fabrics 00:16:20.450 rmmod nvme_keyring 00:16:20.450 14:34:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:20.450 14:34:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:16:20.450 14:34:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:16:20.450 14:34:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 93410 ']' 00:16:20.450 14:34:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 93410 00:16:20.450 14:34:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 93410 ']' 00:16:20.450 14:34:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 93410 00:16:20.450 14:34:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:16:20.450 14:34:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:20.450 14:34:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93410 00:16:20.450 killing process with pid 93410 00:16:20.450 14:34:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:20.450 14:34:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:20.450 14:34:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93410' 00:16:20.450 14:34:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 93410 00:16:20.450 14:34:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 93410 00:16:20.722 14:34:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:20.722 14:34:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:20.722 14:34:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:20.722 14:34:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:20.722 14:34:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:20.722 14:34:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.722 14:34:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:20.722 14:34:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.722 14:34:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush 
nvmf_init_if 00:16:20.722 ************************************ 00:16:20.722 END TEST nvmf_fio_target 00:16:20.722 ************************************ 00:16:20.722 00:16:20.722 real 0m19.613s 00:16:20.722 user 1m15.977s 00:16:20.722 sys 0m8.707s 00:16:20.722 14:34:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:20.722 14:34:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.722 14:34:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:20.723 14:34:32 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:20.723 14:34:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:20.723 14:34:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:20.723 14:34:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:20.723 ************************************ 00:16:20.723 START TEST nvmf_bdevio 00:16:20.723 ************************************ 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:20.723 * Looking for test storage... 00:16:20.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # 
source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- 
target/bdevio.sh@14 -- # nvmftestinit 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:20.723 14:34:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:20.996 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:20.996 Cannot find device "nvmf_tgt_br" 00:16:20.996 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:16:20.996 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:20.997 Cannot find device "nvmf_tgt_br2" 00:16:20.997 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:16:20.997 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:20.997 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:20.997 Cannot find device "nvmf_tgt_br" 00:16:20.997 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:16:20.997 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set 
nvmf_tgt_br2 down 00:16:20.997 Cannot find device "nvmf_tgt_br2" 00:16:20.997 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:16:20.997 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:20.997 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:20.997 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:20.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:20.997 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:16:20.997 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:20.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:20.997 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:16:20.997 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:20.997 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:20.997 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:20.997 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:20.997 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:20.997 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:20.997 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:20.997 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:20.997 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:20.997 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:20.997 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:20.997 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:20.997 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:20.997 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:20.997 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:20.997 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:21.255 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:21.255 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:21.255 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:21.255 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:21.255 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:21.255 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:21.255 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A 
FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:21.255 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:21.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:21.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:16:21.255 00:16:21.255 --- 10.0.0.2 ping statistics --- 00:16:21.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.255 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:16:21.255 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:21.255 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:21.255 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:16:21.255 00:16:21.255 --- 10.0.0.3 ping statistics --- 00:16:21.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.255 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:21.255 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:21.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:21.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:16:21.255 00:16:21.255 --- 10.0.0.1 ping statistics --- 00:16:21.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.255 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:21.255 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:21.255 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:16:21.256 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:21.256 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:21.256 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:21.256 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:21.256 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:21.256 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:21.256 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:21.256 14:34:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:21.256 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:21.256 14:34:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:21.256 14:34:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:21.256 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=94272 00:16:21.256 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:21.256 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 94272 00:16:21.256 14:34:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 94272 ']' 00:16:21.256 14:34:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.256 14:34:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:21.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.256 14:34:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
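
The trace above is nvmf_veth_init (nvmf/common.sh) building the point-to-point test network before nvmf_tgt is started inside the nvmf_tgt_ns_spdk namespace (pid 94272 above). A condensed sketch of the equivalent shell commands, reusing the interface names and addresses that appear in the trace, is shown below; the second target interface (nvmf_tgt_if2 / 10.0.0.3) and the bridge FORWARD rule are left out for brevity, so treat this as an illustration rather than a verbatim copy of common.sh.

# Rebuild the veth/namespace topology used by the NVMe/TCP tests (sketch).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # address the target listens on
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge the host-side peers together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                           # connectivity check, as in the trace

With the network in place the harness runs the target under the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x78) and waits for its /var/tmp/spdk.sock RPC socket before issuing any rpc_cmd calls, which is what the waitforlisten messages around this point are doing.
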
00:16:21.256 14:34:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:21.256 14:34:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:21.256 [2024-07-10 14:34:33.484839] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:16:21.256 [2024-07-10 14:34:33.484915] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.514 [2024-07-10 14:34:33.609274] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:21.514 [2024-07-10 14:34:33.627787] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:21.514 [2024-07-10 14:34:33.671610] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:21.514 [2024-07-10 14:34:33.671670] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:21.514 [2024-07-10 14:34:33.671684] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:21.514 [2024-07-10 14:34:33.671694] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:21.514 [2024-07-10 14:34:33.671703] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:21.514 [2024-07-10 14:34:33.671887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:21.514 [2024-07-10 14:34:33.672027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:21.514 [2024-07-10 14:34:33.672592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:21.514 [2024-07-10 14:34:33.672598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:21.514 14:34:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:21.514 14:34:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:16:21.514 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:21.514 14:34:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:21.514 14:34:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:21.514 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:21.514 14:34:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:21.514 14:34:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.515 14:34:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:21.772 [2024-07-10 14:34:33.808358] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:21.772 14:34:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.772 14:34:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:21.772 14:34:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.772 14:34:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:21.772 Malloc0 00:16:21.772 14:34:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.772 14:34:33 
nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:21.772 14:34:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.772 14:34:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:21.772 14:34:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.772 14:34:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:21.772 14:34:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.772 14:34:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:21.772 14:34:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.772 14:34:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:21.772 14:34:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.772 14:34:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:21.772 [2024-07-10 14:34:33.872492] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:21.772 14:34:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.772 14:34:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:21.772 14:34:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:21.772 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:16:21.772 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:16:21.772 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:21.772 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:21.772 { 00:16:21.772 "params": { 00:16:21.772 "name": "Nvme$subsystem", 00:16:21.772 "trtype": "$TEST_TRANSPORT", 00:16:21.772 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:21.772 "adrfam": "ipv4", 00:16:21.772 "trsvcid": "$NVMF_PORT", 00:16:21.772 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:21.772 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:21.772 "hdgst": ${hdgst:-false}, 00:16:21.772 "ddgst": ${ddgst:-false} 00:16:21.772 }, 00:16:21.772 "method": "bdev_nvme_attach_controller" 00:16:21.772 } 00:16:21.772 EOF 00:16:21.772 )") 00:16:21.772 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:16:21.772 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:16:21.772 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:16:21.772 14:34:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:21.772 "params": { 00:16:21.772 "name": "Nvme1", 00:16:21.772 "trtype": "tcp", 00:16:21.772 "traddr": "10.0.0.2", 00:16:21.772 "adrfam": "ipv4", 00:16:21.772 "trsvcid": "4420", 00:16:21.772 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:21.772 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:21.772 "hdgst": false, 00:16:21.772 "ddgst": false 00:16:21.772 }, 00:16:21.772 "method": "bdev_nvme_attach_controller" 00:16:21.772 }' 00:16:21.772 [2024-07-10 14:34:33.926906] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 
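
Everything bdevio needs on the target side was created above with five rpc_cmd calls; rpc_cmd is the test harness's wrapper around scripts/rpc.py. Issued directly against the default /var/tmp/spdk.sock RPC socket, the same bring-up looks roughly like the sketch below (an illustration of the calls seen in the trace, not the literal bdevio.sh code):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                     # TCP transport with the harness's NVMF_TRANSPORT_OPTS
$rpc bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB malloc bdev, 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial number
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # expose Malloc0 as a namespace of cnode1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # listen on the namespaced address

On the initiator side, bdevio is launched with --json /dev/fd/62, where gen_nvmf_target_json supplies a bdev configuration whose single entry is the bdev_nvme_attach_controller call printed above (Nvme1, trtype tcp, traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1), so the Suite: bdevio tests that follow run against the malloc bdev over the NVMe/TCP connection rather than against a local device.
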
00:16:21.772 [2024-07-10 14:34:33.926988] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94308 ] 00:16:21.773 [2024-07-10 14:34:34.046219] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:22.030 [2024-07-10 14:34:34.066506] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:22.030 [2024-07-10 14:34:34.118128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.030 [2024-07-10 14:34:34.118252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:22.030 [2024-07-10 14:34:34.118271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.030 I/O targets: 00:16:22.030 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:22.030 00:16:22.030 00:16:22.030 CUnit - A unit testing framework for C - Version 2.1-3 00:16:22.030 http://cunit.sourceforge.net/ 00:16:22.030 00:16:22.030 00:16:22.030 Suite: bdevio tests on: Nvme1n1 00:16:22.288 Test: blockdev write read block ...passed 00:16:22.288 Test: blockdev write zeroes read block ...passed 00:16:22.288 Test: blockdev write zeroes read no split ...passed 00:16:22.288 Test: blockdev write zeroes read split ...passed 00:16:22.288 Test: blockdev write zeroes read split partial ...passed 00:16:22.288 Test: blockdev reset ...[2024-07-10 14:34:34.389828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:22.288 [2024-07-10 14:34:34.389966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x776a50 (9): Bad file descriptor 00:16:22.288 [2024-07-10 14:34:34.403030] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:22.288 passed 00:16:22.288 Test: blockdev write read 8 blocks ...passed 00:16:22.288 Test: blockdev write read size > 128k ...passed 00:16:22.288 Test: blockdev write read invalid size ...passed 00:16:22.288 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:22.288 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:22.288 Test: blockdev write read max offset ...passed 00:16:22.288 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:22.288 Test: blockdev writev readv 8 blocks ...passed 00:16:22.288 Test: blockdev writev readv 30 x 1block ...passed 00:16:22.288 Test: blockdev writev readv block ...passed 00:16:22.288 Test: blockdev writev readv size > 128k ...passed 00:16:22.288 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:22.288 Test: blockdev comparev and writev ...[2024-07-10 14:34:34.575299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.288 [2024-07-10 14:34:34.575511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:22.288 [2024-07-10 14:34:34.575638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.288 [2024-07-10 14:34:34.575741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:22.288 [2024-07-10 14:34:34.576132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.288 [2024-07-10 14:34:34.576245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:22.288 [2024-07-10 14:34:34.576370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.288 [2024-07-10 14:34:34.576467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:22.288 [2024-07-10 14:34:34.576865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.288 [2024-07-10 14:34:34.576971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:22.288 [2024-07-10 14:34:34.577065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.288 [2024-07-10 14:34:34.577144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:22.288 [2024-07-10 14:34:34.577610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.288 [2024-07-10 14:34:34.577713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:22.288 [2024-07-10 14:34:34.577797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.288 [2024-07-10 14:34:34.577886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:22.547 passed 00:16:22.547 Test: blockdev nvme passthru rw ...passed 00:16:22.547 Test: blockdev nvme passthru vendor specific ...[2024-07-10 14:34:34.659639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:22.547 [2024-07-10 14:34:34.659911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:22.547 [2024-07-10 14:34:34.660147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:22.547 [2024-07-10 14:34:34.660252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:22.547 [2024-07-10 14:34:34.660482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:22.547 [2024-07-10 14:34:34.660585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:22.547 [2024-07-10 14:34:34.660826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:22.547 [2024-07-10 14:34:34.660927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:22.547 passed 00:16:22.547 Test: blockdev nvme admin passthru ...passed 00:16:22.547 Test: blockdev copy ...passed 00:16:22.547 00:16:22.547 Run Summary: Type Total Ran Passed Failed Inactive 00:16:22.547 suites 1 1 n/a 0 0 00:16:22.547 tests 23 23 23 0 0 00:16:22.547 asserts 152 152 152 0 n/a 00:16:22.547 00:16:22.547 Elapsed time = 0.893 seconds 00:16:22.806 14:34:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.806 14:34:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.806 14:34:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:22.806 14:34:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.806 14:34:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:22.806 14:34:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:16:22.806 14:34:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:22.806 14:34:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:16:22.806 14:34:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:22.806 14:34:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:16:22.806 14:34:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:22.806 14:34:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:22.806 rmmod nvme_tcp 00:16:22.806 rmmod nvme_fabrics 00:16:22.806 rmmod nvme_keyring 00:16:22.806 14:34:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:22.806 14:34:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:16:22.806 14:34:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:16:22.806 14:34:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 94272 ']' 00:16:22.806 14:34:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 94272 00:16:22.806 14:34:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
94272 ']' 00:16:22.806 14:34:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 94272 00:16:22.806 14:34:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:16:22.806 14:34:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:22.806 14:34:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94272 00:16:22.806 14:34:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:16:22.806 14:34:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:16:22.806 14:34:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94272' 00:16:22.806 killing process with pid 94272 00:16:22.806 14:34:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 94272 00:16:22.806 14:34:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 94272 00:16:23.066 14:34:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:23.066 14:34:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:23.066 14:34:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:23.066 14:34:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:23.066 14:34:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:23.066 14:34:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.066 14:34:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:23.066 14:34:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.066 14:34:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:23.066 00:16:23.066 real 0m2.303s 00:16:23.066 user 0m7.523s 00:16:23.066 sys 0m0.682s 00:16:23.066 14:34:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:23.066 14:34:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:23.066 ************************************ 00:16:23.066 END TEST nvmf_bdevio 00:16:23.066 ************************************ 00:16:23.066 14:34:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:23.066 14:34:35 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:23.066 14:34:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:23.066 14:34:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:23.066 14:34:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:23.066 ************************************ 00:16:23.066 START TEST nvmf_auth_target 00:16:23.066 ************************************ 00:16:23.066 14:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:23.066 * Looking for test storage... 
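
For orientation, the nvmf_auth_target run that follows boils down to a short RPC sequence repeated for every digest/dhgroup/key combination. The sketch below condenses that sequence from the trace further down; the sockets, NQNs, key names and temp-file paths are the ones this log uses, and it is an outline of what the trace shows rather than the auth.sh script itself.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9

# register the generated key files with both the target app (spdk.sock) and the host app (host.sock)
$rpc -s /var/tmp/spdk.sock keyring_file_add_key key0  /tmp/spdk.key-null.Wye
$rpc -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lZ6
$rpc -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.Wye
$rpc -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lZ6

# host side: pin the initiator to one digest/dhgroup combination
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# target side: allow this host to connect; ckey0 enables bidirectional authentication
$rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# host side: attach a controller, which drives the DH-HMAC-CHAP handshake over TCP
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# confirm the qpair reports auth state "completed", then tear down before the next combination
$rpc -s /var/tmp/spdk.sock nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
# (the trace then repeats the handshake with the kernel initiator via
#  `nvme connect ... --dhchap-secret ... --dhchap-ctrl-secret ...` before removing the host)
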
00:16:23.066 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:23.066 14:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:23.066 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:23.066 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.066 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.066 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.066 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.066 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.066 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.066 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.066 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.066 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.066 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.066 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:16:23.066 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:16:23.066 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.066 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.066 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:23.066 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.066 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:23.066 14:34:35 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.066 14:34:35 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.066 14:34:35 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.066 14:34:35 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.066 14:34:35 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.066 14:34:35 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.066 14:34:35 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:23.067 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:23.326 Cannot find device "nvmf_tgt_br" 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:23.326 Cannot find device "nvmf_tgt_br2" 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:23.326 Cannot find device "nvmf_tgt_br" 00:16:23.326 
14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:23.326 Cannot find device "nvmf_tgt_br2" 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:23.326 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:23.326 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:23.326 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:23.326 14:34:35 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:23.584 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:23.584 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:23.584 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:23.584 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:23.584 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:16:23.584 00:16:23.584 --- 10.0.0.2 ping statistics --- 00:16:23.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.584 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:16:23.584 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:23.584 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:23.584 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:16:23.584 00:16:23.584 --- 10.0.0.3 ping statistics --- 00:16:23.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.584 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:16:23.584 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:23.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:23.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:23.584 00:16:23.584 --- 10.0.0.1 ping statistics --- 00:16:23.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.584 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:23.584 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.584 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:16:23.584 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:23.584 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.584 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:23.584 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:23.584 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.584 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:23.584 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:23.584 14:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:23.584 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:23.584 14:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:23.584 14:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.584 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=94486 00:16:23.584 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:23.584 14:34:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 94486 00:16:23.584 14:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 94486 ']' 00:16:23.584 14:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.584 14:34:35 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:23.584 14:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.584 14:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:23.584 14:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.843 14:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:23.843 14:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=94516 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5e661939db379554da10c398917db07ffb3a4cefae1d7acb 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Wye 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5e661939db379554da10c398917db07ffb3a4cefae1d7acb 0 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5e661939db379554da10c398917db07ffb3a4cefae1d7acb 0 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5e661939db379554da10c398917db07ffb3a4cefae1d7acb 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Wye 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Wye 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.Wye 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d5ef99540e9eaf3742f5121a1abb9f185fd237685592331cab2bef58165f6a75 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.lZ6 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d5ef99540e9eaf3742f5121a1abb9f185fd237685592331cab2bef58165f6a75 3 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d5ef99540e9eaf3742f5121a1abb9f185fd237685592331cab2bef58165f6a75 3 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d5ef99540e9eaf3742f5121a1abb9f185fd237685592331cab2bef58165f6a75 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:23.844 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.lZ6 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.lZ6 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.lZ6 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f4decc6bd666ad74f6ac33c5a19d25eb 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.CEY 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f4decc6bd666ad74f6ac33c5a19d25eb 1 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f4decc6bd666ad74f6ac33c5a19d25eb 1 
00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f4decc6bd666ad74f6ac33c5a19d25eb 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.CEY 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.CEY 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.CEY 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d9b6b9cf1f6db32bcf16daf3eb807ecc66b5defa63af32a5 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.wv0 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d9b6b9cf1f6db32bcf16daf3eb807ecc66b5defa63af32a5 2 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d9b6b9cf1f6db32bcf16daf3eb807ecc66b5defa63af32a5 2 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d9b6b9cf1f6db32bcf16daf3eb807ecc66b5defa63af32a5 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.wv0 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.wv0 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.wv0 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:24.103 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:24.104 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:24.104 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:24.104 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:24.104 
14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:24.104 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ae92f8b84e4e2a0f8591861e8e72fe0c65ef3f913b97297c 00:16:24.104 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:24.104 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ZEG 00:16:24.104 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ae92f8b84e4e2a0f8591861e8e72fe0c65ef3f913b97297c 2 00:16:24.104 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ae92f8b84e4e2a0f8591861e8e72fe0c65ef3f913b97297c 2 00:16:24.104 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:24.104 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:24.104 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ae92f8b84e4e2a0f8591861e8e72fe0c65ef3f913b97297c 00:16:24.104 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:24.104 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:24.104 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ZEG 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ZEG 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.ZEG 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bd41e30f2dcd77ee7ae5f2481716b55a 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.e0d 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bd41e30f2dcd77ee7ae5f2481716b55a 1 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bd41e30f2dcd77ee7ae5f2481716b55a 1 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bd41e30f2dcd77ee7ae5f2481716b55a 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.e0d 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.e0d 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.e0d 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=72955b1591f76b752c2e3d880195f6793a4b7d71e83e3acb97df15a9b76dfcd3 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ghB 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 72955b1591f76b752c2e3d880195f6793a4b7d71e83e3acb97df15a9b76dfcd3 3 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 72955b1591f76b752c2e3d880195f6793a4b7d71e83e3acb97df15a9b76dfcd3 3 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=72955b1591f76b752c2e3d880195f6793a4b7d71e83e3acb97df15a9b76dfcd3 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ghB 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ghB 00:16:24.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.ghB 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 94486 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 94486 ']' 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:24.362 14:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
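
The gen_dhchap_key calls above are what produce the DHHC-1 secrets that reappear in the `nvme connect` commands further down. The following is a standalone sketch of that formatting, mirroring the xxd and inline `python -` steps in the trace: treating the ASCII hex string itself as the secret matches how the base64 in the later connect commands decodes, while the 4-byte CRC-32 suffix and its little-endian byte order are assumptions rather than something stated in the log.

key=$(xxd -p -c0 -l 24 /dev/urandom)    # 48 hex characters, as for gen_dhchap_key "null 48" above
digest=0                                # 0=null, 1=sha256, 2=sha384, 3=sha512, per the digests map in the trace

python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib

secret = sys.argv[1].encode()                   # the ASCII hex string itself is the secret
digest = int(sys.argv[2])
crc = zlib.crc32(secret).to_bytes(4, "little")  # assumed checksum appended before base64 encoding
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(secret + crc).decode()))
EOF

Running this prints a value shaped like the DHHC-1:00:...: secret passed to `nvme connect` later in this log.
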
00:16:24.622 14:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:24.622 14:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:24.622 14:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 94516 /var/tmp/host.sock 00:16:24.622 14:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 94516 ']' 00:16:24.622 14:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:16:24.622 14:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:24.622 14:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:24.622 14:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:24.622 14:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.881 14:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:24.881 14:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:24.881 14:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:24.881 14:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.881 14:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.881 14:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.881 14:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:24.881 14:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Wye 00:16:24.881 14:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.881 14:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.881 14:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.881 14:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Wye 00:16:24.881 14:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Wye 00:16:25.449 14:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.lZ6 ]] 00:16:25.449 14:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lZ6 00:16:25.449 14:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.449 14:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.449 14:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.449 14:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lZ6 00:16:25.449 14:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lZ6 00:16:25.449 14:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:25.449 14:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.CEY 00:16:25.449 14:34:37 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.449 14:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.449 14:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.449 14:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.CEY 00:16:25.449 14:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.CEY 00:16:25.707 14:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.wv0 ]] 00:16:25.707 14:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wv0 00:16:25.707 14:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.707 14:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.707 14:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.707 14:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wv0 00:16:25.707 14:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wv0 00:16:25.966 14:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:25.966 14:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ZEG 00:16:25.966 14:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.966 14:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.966 14:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.966 14:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ZEG 00:16:25.966 14:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ZEG 00:16:26.224 14:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.e0d ]] 00:16:26.224 14:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.e0d 00:16:26.224 14:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.224 14:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.224 14:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.224 14:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.e0d 00:16:26.224 14:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.e0d 00:16:26.483 14:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:26.483 14:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ghB 00:16:26.483 14:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.483 14:34:38 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:26.483 14:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.483 14:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ghB 00:16:26.483 14:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.ghB 00:16:26.742 14:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:26.742 14:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:26.742 14:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.742 14:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.742 14:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:26.742 14:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:27.000 14:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:27.000 14:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:27.000 14:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:27.000 14:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:27.000 14:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:27.000 14:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.000 14:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.000 14:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.000 14:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.259 14:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.259 14:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.259 14:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.518 00:16:27.518 14:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:27.518 14:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.518 14:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:27.776 14:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.776 14:34:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.776 14:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.776 14:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.776 14:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.776 14:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.776 { 00:16:27.776 "auth": { 00:16:27.776 "dhgroup": "null", 00:16:27.776 "digest": "sha256", 00:16:27.776 "state": "completed" 00:16:27.776 }, 00:16:27.776 "cntlid": 1, 00:16:27.776 "listen_address": { 00:16:27.776 "adrfam": "IPv4", 00:16:27.776 "traddr": "10.0.0.2", 00:16:27.776 "trsvcid": "4420", 00:16:27.776 "trtype": "TCP" 00:16:27.776 }, 00:16:27.776 "peer_address": { 00:16:27.776 "adrfam": "IPv4", 00:16:27.776 "traddr": "10.0.0.1", 00:16:27.776 "trsvcid": "44868", 00:16:27.776 "trtype": "TCP" 00:16:27.776 }, 00:16:27.776 "qid": 0, 00:16:27.776 "state": "enabled", 00:16:27.776 "thread": "nvmf_tgt_poll_group_000" 00:16:27.776 } 00:16:27.776 ]' 00:16:27.776 14:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.776 14:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:27.776 14:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.035 14:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:28.035 14:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.035 14:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.035 14:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.035 14:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.292 14:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:00:NWU2NjE5MzlkYjM3OTU1NGRhMTBjMzk4OTE3ZGIwN2ZmYjNhNGNlZmFlMWQ3YWNiMSNgow==: --dhchap-ctrl-secret DHHC-1:03:ZDVlZjk5NTQwZTllYWYzNzQyZjUxMjFhMWFiYjlmMTg1ZmQyMzc2ODU1OTIzMzFjYWIyYmVmNTgxNjVmNmE3Na08ckc=: 00:16:33.561 14:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.561 14:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:16:33.561 14:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.561 14:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.561 14:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.561 14:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.561 14:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:33.561 14:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:33.561 14:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:33.561 14:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.561 14:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:33.561 14:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:33.561 14:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:33.561 14:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.561 14:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.561 14:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.561 14:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.561 14:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.561 14:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.561 14:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.561 00:16:33.561 14:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.561 14:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.561 14:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:33.819 14:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.819 14:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.819 14:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.819 14:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.819 14:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.819 14:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:33.819 { 00:16:33.819 "auth": { 00:16:33.819 "dhgroup": "null", 00:16:33.819 "digest": "sha256", 00:16:33.819 "state": "completed" 00:16:33.819 }, 00:16:33.819 "cntlid": 3, 00:16:33.819 "listen_address": { 00:16:33.819 "adrfam": "IPv4", 00:16:33.819 "traddr": "10.0.0.2", 00:16:33.819 "trsvcid": "4420", 00:16:33.819 "trtype": "TCP" 00:16:33.819 }, 00:16:33.819 "peer_address": { 00:16:33.819 "adrfam": "IPv4", 00:16:33.819 "traddr": "10.0.0.1", 00:16:33.819 "trsvcid": "39802", 00:16:33.819 "trtype": "TCP" 00:16:33.819 }, 00:16:33.819 "qid": 0, 00:16:33.819 "state": "enabled", 00:16:33.819 "thread": "nvmf_tgt_poll_group_000" 
00:16:33.819 } 00:16:33.819 ]' 00:16:33.819 14:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:33.819 14:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.819 14:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.077 14:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:34.077 14:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.077 14:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.077 14:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.077 14:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.335 14:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:01:ZjRkZWNjNmJkNjY2YWQ3NGY2YWMzM2M1YTE5ZDI1ZWJFLMcm: --dhchap-ctrl-secret DHHC-1:02:ZDliNmI5Y2YxZjZkYjMyYmNmMTZkYWYzZWI4MDdlY2M2NmI1ZGVmYTYzYWYzMmE1jv0a6g==: 00:16:35.268 14:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.268 14:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:16:35.268 14:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.268 14:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.268 14:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.268 14:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.268 14:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:35.268 14:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:35.268 14:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:35.268 14:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.268 14:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:35.268 14:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:35.268 14:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:35.268 14:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.268 14:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.268 14:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.268 14:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:16:35.268 14:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.268 14:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.268 14:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.833 00:16:35.833 14:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.833 14:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.833 14:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.090 14:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.090 14:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.090 14:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.090 14:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.090 14:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.090 14:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.090 { 00:16:36.090 "auth": { 00:16:36.090 "dhgroup": "null", 00:16:36.090 "digest": "sha256", 00:16:36.090 "state": "completed" 00:16:36.090 }, 00:16:36.090 "cntlid": 5, 00:16:36.090 "listen_address": { 00:16:36.090 "adrfam": "IPv4", 00:16:36.090 "traddr": "10.0.0.2", 00:16:36.090 "trsvcid": "4420", 00:16:36.090 "trtype": "TCP" 00:16:36.090 }, 00:16:36.090 "peer_address": { 00:16:36.090 "adrfam": "IPv4", 00:16:36.090 "traddr": "10.0.0.1", 00:16:36.090 "trsvcid": "39828", 00:16:36.090 "trtype": "TCP" 00:16:36.090 }, 00:16:36.090 "qid": 0, 00:16:36.090 "state": "enabled", 00:16:36.090 "thread": "nvmf_tgt_poll_group_000" 00:16:36.090 } 00:16:36.090 ]' 00:16:36.090 14:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.090 14:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.090 14:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.090 14:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:36.090 14:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.347 14:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.347 14:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.347 14:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.604 14:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 
29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:02:YWU5MmY4Yjg0ZTRlMmEwZjg1OTE4NjFlOGU3MmZlMGM2NWVmM2Y5MTNiOTcyOTdjYzJ8bw==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MWUzMGYyZGNkNzdlZTdhZTVmMjQ4MTcxNmI1NWEga3S7: 00:16:37.167 14:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.167 14:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:16:37.167 14:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.167 14:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.167 14:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.167 14:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.167 14:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:37.167 14:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:37.424 14:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:37.424 14:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.424 14:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:37.424 14:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:37.424 14:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:37.424 14:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.424 14:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key3 00:16:37.424 14:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.424 14:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.424 14:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.424 14:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.424 14:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.990 00:16:37.990 14:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.990 14:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.990 14:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.990 14:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:16:37.990 14:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.990 14:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.990 14:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.272 14:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.272 14:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.272 { 00:16:38.272 "auth": { 00:16:38.272 "dhgroup": "null", 00:16:38.272 "digest": "sha256", 00:16:38.272 "state": "completed" 00:16:38.272 }, 00:16:38.272 "cntlid": 7, 00:16:38.272 "listen_address": { 00:16:38.272 "adrfam": "IPv4", 00:16:38.272 "traddr": "10.0.0.2", 00:16:38.272 "trsvcid": "4420", 00:16:38.272 "trtype": "TCP" 00:16:38.272 }, 00:16:38.272 "peer_address": { 00:16:38.272 "adrfam": "IPv4", 00:16:38.272 "traddr": "10.0.0.1", 00:16:38.272 "trsvcid": "39862", 00:16:38.272 "trtype": "TCP" 00:16:38.272 }, 00:16:38.272 "qid": 0, 00:16:38.272 "state": "enabled", 00:16:38.272 "thread": "nvmf_tgt_poll_group_000" 00:16:38.272 } 00:16:38.272 ]' 00:16:38.272 14:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.272 14:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.272 14:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.272 14:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:38.272 14:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.272 14:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.272 14:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.272 14:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.538 14:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:03:NzI5NTViMTU5MWY3NmI3NTJjMmUzZDg4MDE5NWY2NzkzYTRiN2Q3MWU4M2UzYWNiOTdkZjE1YTliNzZkZmNkMzNDPj4=: 00:16:39.470 14:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.470 14:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:16:39.470 14:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.470 14:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.470 14:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.470 14:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.470 14:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.470 14:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:39.470 14:34:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:39.728 14:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:39.728 14:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:39.728 14:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:39.728 14:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:39.728 14:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:39.728 14:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.728 14:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.728 14:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.728 14:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.728 14:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.728 14:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.729 14:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.987 00:16:39.987 14:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:39.987 14:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:39.987 14:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.245 14:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.245 14:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.245 14:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.245 14:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.245 14:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.245 14:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.245 { 00:16:40.245 "auth": { 00:16:40.245 "dhgroup": "ffdhe2048", 00:16:40.245 "digest": "sha256", 00:16:40.245 "state": "completed" 00:16:40.245 }, 00:16:40.245 "cntlid": 9, 00:16:40.245 "listen_address": { 00:16:40.245 "adrfam": "IPv4", 00:16:40.245 "traddr": "10.0.0.2", 00:16:40.245 "trsvcid": "4420", 00:16:40.246 "trtype": "TCP" 00:16:40.246 }, 00:16:40.246 "peer_address": { 00:16:40.246 "adrfam": "IPv4", 00:16:40.246 "traddr": "10.0.0.1", 00:16:40.246 "trsvcid": "39886", 00:16:40.246 "trtype": "TCP" 00:16:40.246 }, 00:16:40.246 "qid": 0, 
00:16:40.246 "state": "enabled", 00:16:40.246 "thread": "nvmf_tgt_poll_group_000" 00:16:40.246 } 00:16:40.246 ]' 00:16:40.246 14:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.246 14:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.246 14:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.504 14:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:40.504 14:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.504 14:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.504 14:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.504 14:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.764 14:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:00:NWU2NjE5MzlkYjM3OTU1NGRhMTBjMzk4OTE3ZGIwN2ZmYjNhNGNlZmFlMWQ3YWNiMSNgow==: --dhchap-ctrl-secret DHHC-1:03:ZDVlZjk5NTQwZTllYWYzNzQyZjUxMjFhMWFiYjlmMTg1ZmQyMzc2ODU1OTIzMzFjYWIyYmVmNTgxNjVmNmE3Na08ckc=: 00:16:41.699 14:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.699 14:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:16:41.699 14:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.699 14:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.699 14:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.699 14:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.699 14:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:41.699 14:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:41.699 14:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:41.699 14:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.699 14:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:41.699 14:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:41.699 14:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:41.699 14:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.699 14:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.699 14:34:53 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.699 14:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.699 14:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.699 14:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.699 14:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.266 00:16:42.266 14:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:42.266 14:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.266 14:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:42.525 14:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.525 14:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.525 14:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.525 14:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.525 14:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.525 14:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.525 { 00:16:42.525 "auth": { 00:16:42.525 "dhgroup": "ffdhe2048", 00:16:42.525 "digest": "sha256", 00:16:42.525 "state": "completed" 00:16:42.525 }, 00:16:42.525 "cntlid": 11, 00:16:42.525 "listen_address": { 00:16:42.525 "adrfam": "IPv4", 00:16:42.525 "traddr": "10.0.0.2", 00:16:42.525 "trsvcid": "4420", 00:16:42.525 "trtype": "TCP" 00:16:42.525 }, 00:16:42.525 "peer_address": { 00:16:42.525 "adrfam": "IPv4", 00:16:42.525 "traddr": "10.0.0.1", 00:16:42.525 "trsvcid": "47120", 00:16:42.525 "trtype": "TCP" 00:16:42.525 }, 00:16:42.525 "qid": 0, 00:16:42.525 "state": "enabled", 00:16:42.525 "thread": "nvmf_tgt_poll_group_000" 00:16:42.525 } 00:16:42.525 ]' 00:16:42.525 14:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.525 14:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.525 14:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.525 14:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:42.525 14:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.525 14:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.525 14:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.525 14:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.784 14:34:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:01:ZjRkZWNjNmJkNjY2YWQ3NGY2YWMzM2M1YTE5ZDI1ZWJFLMcm: --dhchap-ctrl-secret DHHC-1:02:ZDliNmI5Y2YxZjZkYjMyYmNmMTZkYWYzZWI4MDdlY2M2NmI1ZGVmYTYzYWYzMmE1jv0a6g==: 00:16:43.719 14:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.719 14:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:16:43.719 14:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.719 14:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.719 14:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.719 14:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.719 14:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:43.719 14:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:43.981 14:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:43.981 14:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.981 14:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:43.981 14:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:43.981 14:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:43.981 14:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.981 14:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.981 14:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.981 14:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.981 14:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.981 14:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.981 14:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.240 00:16:44.240 14:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.240 14:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.240 14:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.533 14:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.533 14:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.533 14:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.533 14:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.533 14:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.533 14:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:44.533 { 00:16:44.533 "auth": { 00:16:44.533 "dhgroup": "ffdhe2048", 00:16:44.533 "digest": "sha256", 00:16:44.533 "state": "completed" 00:16:44.533 }, 00:16:44.533 "cntlid": 13, 00:16:44.533 "listen_address": { 00:16:44.533 "adrfam": "IPv4", 00:16:44.533 "traddr": "10.0.0.2", 00:16:44.533 "trsvcid": "4420", 00:16:44.533 "trtype": "TCP" 00:16:44.533 }, 00:16:44.533 "peer_address": { 00:16:44.533 "adrfam": "IPv4", 00:16:44.533 "traddr": "10.0.0.1", 00:16:44.533 "trsvcid": "47156", 00:16:44.533 "trtype": "TCP" 00:16:44.533 }, 00:16:44.533 "qid": 0, 00:16:44.533 "state": "enabled", 00:16:44.533 "thread": "nvmf_tgt_poll_group_000" 00:16:44.533 } 00:16:44.533 ]' 00:16:44.533 14:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:44.819 14:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.819 14:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:44.819 14:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:44.819 14:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:44.819 14:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.819 14:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.819 14:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.078 14:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:02:YWU5MmY4Yjg0ZTRlMmEwZjg1OTE4NjFlOGU3MmZlMGM2NWVmM2Y5MTNiOTcyOTdjYzJ8bw==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MWUzMGYyZGNkNzdlZTdhZTVmMjQ4MTcxNmI1NWEga3S7: 00:16:46.012 14:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.012 14:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:16:46.012 14:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.012 14:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.012 14:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.012 14:34:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.012 14:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:46.012 14:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:46.270 14:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:46.270 14:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.270 14:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:46.270 14:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:46.270 14:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:46.270 14:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.270 14:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key3 00:16:46.270 14:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.271 14:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.271 14:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.271 14:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:46.271 14:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:46.529 00:16:46.529 14:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.529 14:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.529 14:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.786 14:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.786 14:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.787 14:34:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.787 14:34:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.787 14:34:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.787 14:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.787 { 00:16:46.787 "auth": { 00:16:46.787 "dhgroup": "ffdhe2048", 00:16:46.787 "digest": "sha256", 00:16:46.787 "state": "completed" 00:16:46.787 }, 00:16:46.787 "cntlid": 15, 00:16:46.787 "listen_address": { 00:16:46.787 "adrfam": "IPv4", 00:16:46.787 "traddr": "10.0.0.2", 00:16:46.787 "trsvcid": "4420", 00:16:46.787 "trtype": "TCP" 00:16:46.787 }, 00:16:46.787 
"peer_address": { 00:16:46.787 "adrfam": "IPv4", 00:16:46.787 "traddr": "10.0.0.1", 00:16:46.787 "trsvcid": "47186", 00:16:46.787 "trtype": "TCP" 00:16:46.787 }, 00:16:46.787 "qid": 0, 00:16:46.787 "state": "enabled", 00:16:46.787 "thread": "nvmf_tgt_poll_group_000" 00:16:46.787 } 00:16:46.787 ]' 00:16:46.787 14:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.044 14:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.044 14:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.044 14:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:47.044 14:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.044 14:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.044 14:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.044 14:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.302 14:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:03:NzI5NTViMTU5MWY3NmI3NTJjMmUzZDg4MDE5NWY2NzkzYTRiN2Q3MWU4M2UzYWNiOTdkZjE1YTliNzZkZmNkMzNDPj4=: 00:16:48.237 14:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.237 14:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:16:48.237 14:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.237 14:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.237 14:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.237 14:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.237 14:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.237 14:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:48.237 14:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:48.494 14:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:48.494 14:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:48.494 14:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:48.494 14:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:48.494 14:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:48.494 14:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.494 14:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.494 14:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.494 14:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.494 14:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.494 14:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.494 14:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.752 00:16:48.752 14:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:48.752 14:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.752 14:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.011 14:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.011 14:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.011 14:35:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.011 14:35:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.011 14:35:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.011 14:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.011 { 00:16:49.011 "auth": { 00:16:49.011 "dhgroup": "ffdhe3072", 00:16:49.011 "digest": "sha256", 00:16:49.011 "state": "completed" 00:16:49.011 }, 00:16:49.011 "cntlid": 17, 00:16:49.011 "listen_address": { 00:16:49.011 "adrfam": "IPv4", 00:16:49.011 "traddr": "10.0.0.2", 00:16:49.011 "trsvcid": "4420", 00:16:49.011 "trtype": "TCP" 00:16:49.011 }, 00:16:49.011 "peer_address": { 00:16:49.011 "adrfam": "IPv4", 00:16:49.011 "traddr": "10.0.0.1", 00:16:49.011 "trsvcid": "47214", 00:16:49.011 "trtype": "TCP" 00:16:49.011 }, 00:16:49.011 "qid": 0, 00:16:49.011 "state": "enabled", 00:16:49.011 "thread": "nvmf_tgt_poll_group_000" 00:16:49.011 } 00:16:49.011 ]' 00:16:49.011 14:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.269 14:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.269 14:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.269 14:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:49.269 14:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.269 14:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.269 14:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.269 14:35:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.528 14:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:00:NWU2NjE5MzlkYjM3OTU1NGRhMTBjMzk4OTE3ZGIwN2ZmYjNhNGNlZmFlMWQ3YWNiMSNgow==: --dhchap-ctrl-secret DHHC-1:03:ZDVlZjk5NTQwZTllYWYzNzQyZjUxMjFhMWFiYjlmMTg1ZmQyMzc2ODU1OTIzMzFjYWIyYmVmNTgxNjVmNmE3Na08ckc=: 00:16:50.462 14:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.462 14:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:16:50.462 14:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.462 14:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.462 14:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.462 14:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.462 14:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:50.462 14:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:50.462 14:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:50.463 14:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.463 14:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:50.463 14:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:50.463 14:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:50.463 14:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.463 14:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.463 14:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.463 14:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.463 14:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.463 14:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.463 14:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.029 00:16:51.029 14:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.029 14:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.029 14:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.287 14:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.287 14:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.287 14:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.287 14:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.287 14:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.287 14:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.287 { 00:16:51.287 "auth": { 00:16:51.287 "dhgroup": "ffdhe3072", 00:16:51.287 "digest": "sha256", 00:16:51.287 "state": "completed" 00:16:51.287 }, 00:16:51.287 "cntlid": 19, 00:16:51.287 "listen_address": { 00:16:51.287 "adrfam": "IPv4", 00:16:51.287 "traddr": "10.0.0.2", 00:16:51.287 "trsvcid": "4420", 00:16:51.287 "trtype": "TCP" 00:16:51.287 }, 00:16:51.287 "peer_address": { 00:16:51.287 "adrfam": "IPv4", 00:16:51.287 "traddr": "10.0.0.1", 00:16:51.287 "trsvcid": "47236", 00:16:51.287 "trtype": "TCP" 00:16:51.287 }, 00:16:51.287 "qid": 0, 00:16:51.287 "state": "enabled", 00:16:51.287 "thread": "nvmf_tgt_poll_group_000" 00:16:51.287 } 00:16:51.287 ]' 00:16:51.287 14:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.287 14:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.287 14:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.545 14:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:51.545 14:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.545 14:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.545 14:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.545 14:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.803 14:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:01:ZjRkZWNjNmJkNjY2YWQ3NGY2YWMzM2M1YTE5ZDI1ZWJFLMcm: --dhchap-ctrl-secret DHHC-1:02:ZDliNmI5Y2YxZjZkYjMyYmNmMTZkYWYzZWI4MDdlY2M2NmI1ZGVmYTYzYWYzMmE1jv0a6g==: 00:16:52.739 14:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.739 14:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:16:52.739 14:35:04 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.739 14:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.739 14:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.739 14:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.739 14:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:52.739 14:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:52.998 14:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:52.998 14:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.998 14:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:52.998 14:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:52.998 14:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:52.998 14:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.998 14:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.998 14:35:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.998 14:35:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.998 14:35:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.998 14:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.998 14:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.256 00:16:53.256 14:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.256 14:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.256 14:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.824 14:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.824 14:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.824 14:35:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.824 14:35:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.824 14:35:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.824 14:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.824 { 00:16:53.824 "auth": { 
00:16:53.824 "dhgroup": "ffdhe3072", 00:16:53.824 "digest": "sha256", 00:16:53.824 "state": "completed" 00:16:53.824 }, 00:16:53.824 "cntlid": 21, 00:16:53.824 "listen_address": { 00:16:53.824 "adrfam": "IPv4", 00:16:53.824 "traddr": "10.0.0.2", 00:16:53.824 "trsvcid": "4420", 00:16:53.824 "trtype": "TCP" 00:16:53.824 }, 00:16:53.824 "peer_address": { 00:16:53.824 "adrfam": "IPv4", 00:16:53.824 "traddr": "10.0.0.1", 00:16:53.824 "trsvcid": "57954", 00:16:53.824 "trtype": "TCP" 00:16:53.824 }, 00:16:53.824 "qid": 0, 00:16:53.824 "state": "enabled", 00:16:53.824 "thread": "nvmf_tgt_poll_group_000" 00:16:53.824 } 00:16:53.824 ]' 00:16:53.824 14:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.824 14:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.824 14:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.824 14:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:53.824 14:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.824 14:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.824 14:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.824 14:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.082 14:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:02:YWU5MmY4Yjg0ZTRlMmEwZjg1OTE4NjFlOGU3MmZlMGM2NWVmM2Y5MTNiOTcyOTdjYzJ8bw==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MWUzMGYyZGNkNzdlZTdhZTVmMjQ4MTcxNmI1NWEga3S7: 00:16:55.019 14:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.019 14:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:16:55.019 14:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.019 14:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.019 14:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.019 14:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.019 14:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:55.019 14:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:55.019 14:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:55.019 14:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.019 14:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:55.019 14:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 
00:16:55.019 14:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:55.019 14:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.019 14:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key3 00:16:55.019 14:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.019 14:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.019 14:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.019 14:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.019 14:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.587 00:16:55.587 14:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.587 14:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.587 14:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.845 14:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.845 14:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.845 14:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.845 14:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.845 14:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.845 14:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.845 { 00:16:55.845 "auth": { 00:16:55.845 "dhgroup": "ffdhe3072", 00:16:55.845 "digest": "sha256", 00:16:55.845 "state": "completed" 00:16:55.845 }, 00:16:55.845 "cntlid": 23, 00:16:55.845 "listen_address": { 00:16:55.845 "adrfam": "IPv4", 00:16:55.845 "traddr": "10.0.0.2", 00:16:55.845 "trsvcid": "4420", 00:16:55.845 "trtype": "TCP" 00:16:55.845 }, 00:16:55.845 "peer_address": { 00:16:55.845 "adrfam": "IPv4", 00:16:55.845 "traddr": "10.0.0.1", 00:16:55.845 "trsvcid": "57974", 00:16:55.845 "trtype": "TCP" 00:16:55.845 }, 00:16:55.845 "qid": 0, 00:16:55.845 "state": "enabled", 00:16:55.845 "thread": "nvmf_tgt_poll_group_000" 00:16:55.845 } 00:16:55.845 ]' 00:16:55.845 14:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.845 14:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.845 14:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.845 14:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:55.845 14:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.103 14:35:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.103 14:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.103 14:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.362 14:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:03:NzI5NTViMTU5MWY3NmI3NTJjMmUzZDg4MDE5NWY2NzkzYTRiN2Q3MWU4M2UzYWNiOTdkZjE1YTliNzZkZmNkMzNDPj4=: 00:16:56.939 14:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.939 14:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:16:56.939 14:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.939 14:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.939 14:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.939 14:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.939 14:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:56.939 14:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:56.939 14:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:57.197 14:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:16:57.197 14:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.197 14:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:57.197 14:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:57.197 14:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:57.197 14:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.197 14:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.197 14:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.197 14:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.454 14:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.454 14:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.454 14:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.713 00:16:57.713 14:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.713 14:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:57.713 14:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.971 14:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.971 14:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.971 14:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.971 14:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.971 14:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.971 14:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.971 { 00:16:57.971 "auth": { 00:16:57.971 "dhgroup": "ffdhe4096", 00:16:57.971 "digest": "sha256", 00:16:57.971 "state": "completed" 00:16:57.971 }, 00:16:57.971 "cntlid": 25, 00:16:57.971 "listen_address": { 00:16:57.971 "adrfam": "IPv4", 00:16:57.971 "traddr": "10.0.0.2", 00:16:57.971 "trsvcid": "4420", 00:16:57.971 "trtype": "TCP" 00:16:57.971 }, 00:16:57.971 "peer_address": { 00:16:57.971 "adrfam": "IPv4", 00:16:57.971 "traddr": "10.0.0.1", 00:16:57.971 "trsvcid": "58004", 00:16:57.971 "trtype": "TCP" 00:16:57.971 }, 00:16:57.971 "qid": 0, 00:16:57.971 "state": "enabled", 00:16:57.971 "thread": "nvmf_tgt_poll_group_000" 00:16:57.971 } 00:16:57.971 ]' 00:16:57.971 14:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.229 14:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.230 14:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.230 14:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:58.230 14:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.230 14:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.230 14:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.230 14:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.488 14:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:00:NWU2NjE5MzlkYjM3OTU1NGRhMTBjMzk4OTE3ZGIwN2ZmYjNhNGNlZmFlMWQ3YWNiMSNgow==: --dhchap-ctrl-secret DHHC-1:03:ZDVlZjk5NTQwZTllYWYzNzQyZjUxMjFhMWFiYjlmMTg1ZmQyMzc2ODU1OTIzMzFjYWIyYmVmNTgxNjVmNmE3Na08ckc=: 00:16:59.421 14:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.421 
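Besides the SPDK host-side attach, each pass also authenticates with the kernel initiator through nvme-cli, which is the connect/disconnect pair logged just above. A sketch of that step with the secrets replaced by placeholders (the real DHHC-1 strings are the generated keys from earlier in the run, and --dhchap-ctrl-secret is only passed when a controller key exists for that index):

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 \
      --hostid 29002397-6866-4d44-9964-2c83ec2680a9 \
      --dhchap-secret 'DHHC-1:00:<host key>' \
      --dhchap-ctrl-secret 'DHHC-1:03:<controller key>'

  # tear the session down and deregister the host before the next key index
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9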
14:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:16:59.421 14:35:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.421 14:35:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.421 14:35:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.421 14:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.421 14:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:59.421 14:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:59.421 14:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:59.421 14:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.421 14:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:59.421 14:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:59.421 14:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:59.421 14:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.421 14:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.421 14:35:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.421 14:35:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.679 14:35:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.680 14:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.680 14:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.938 00:16:59.938 14:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.938 14:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.938 14:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.196 14:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.196 14:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.196 14:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.196 14:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:17:00.196 14:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.196 14:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.196 { 00:17:00.196 "auth": { 00:17:00.196 "dhgroup": "ffdhe4096", 00:17:00.196 "digest": "sha256", 00:17:00.196 "state": "completed" 00:17:00.196 }, 00:17:00.196 "cntlid": 27, 00:17:00.196 "listen_address": { 00:17:00.196 "adrfam": "IPv4", 00:17:00.196 "traddr": "10.0.0.2", 00:17:00.196 "trsvcid": "4420", 00:17:00.196 "trtype": "TCP" 00:17:00.196 }, 00:17:00.196 "peer_address": { 00:17:00.196 "adrfam": "IPv4", 00:17:00.196 "traddr": "10.0.0.1", 00:17:00.196 "trsvcid": "58028", 00:17:00.196 "trtype": "TCP" 00:17:00.196 }, 00:17:00.196 "qid": 0, 00:17:00.196 "state": "enabled", 00:17:00.196 "thread": "nvmf_tgt_poll_group_000" 00:17:00.196 } 00:17:00.196 ]' 00:17:00.196 14:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.456 14:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.456 14:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.456 14:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:00.456 14:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.456 14:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.456 14:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.456 14:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.715 14:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:01:ZjRkZWNjNmJkNjY2YWQ3NGY2YWMzM2M1YTE5ZDI1ZWJFLMcm: --dhchap-ctrl-secret DHHC-1:02:ZDliNmI5Y2YxZjZkYjMyYmNmMTZkYWYzZWI4MDdlY2M2NmI1ZGVmYTYzYWYzMmE1jv0a6g==: 00:17:01.649 14:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.649 14:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:17:01.649 14:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.649 14:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.649 14:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.649 14:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.649 14:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:01.649 14:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:01.649 14:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:17:01.649 14:35:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.649 14:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:01.649 14:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:01.649 14:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:01.649 14:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.649 14:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.649 14:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.649 14:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.649 14:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.649 14:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.649 14:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.215 00:17:02.215 14:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.215 14:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.215 14:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.473 14:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.474 14:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.474 14:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.474 14:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.474 14:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.474 14:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.474 { 00:17:02.474 "auth": { 00:17:02.474 "dhgroup": "ffdhe4096", 00:17:02.474 "digest": "sha256", 00:17:02.474 "state": "completed" 00:17:02.474 }, 00:17:02.474 "cntlid": 29, 00:17:02.474 "listen_address": { 00:17:02.474 "adrfam": "IPv4", 00:17:02.474 "traddr": "10.0.0.2", 00:17:02.474 "trsvcid": "4420", 00:17:02.474 "trtype": "TCP" 00:17:02.474 }, 00:17:02.474 "peer_address": { 00:17:02.474 "adrfam": "IPv4", 00:17:02.474 "traddr": "10.0.0.1", 00:17:02.474 "trsvcid": "49712", 00:17:02.474 "trtype": "TCP" 00:17:02.474 }, 00:17:02.474 "qid": 0, 00:17:02.474 "state": "enabled", 00:17:02.474 "thread": "nvmf_tgt_poll_group_000" 00:17:02.474 } 00:17:02.474 ]' 00:17:02.474 14:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.474 14:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.474 14:35:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.474 14:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:02.474 14:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.732 14:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.732 14:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.732 14:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.991 14:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:02:YWU5MmY4Yjg0ZTRlMmEwZjg1OTE4NjFlOGU3MmZlMGM2NWVmM2Y5MTNiOTcyOTdjYzJ8bw==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MWUzMGYyZGNkNzdlZTdhZTVmMjQ4MTcxNmI1NWEga3S7: 00:17:03.925 14:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.925 14:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:17:03.925 14:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.925 14:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.925 14:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.925 14:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.925 14:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:03.925 14:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:03.925 14:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:17:03.925 14:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.925 14:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:03.925 14:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:03.925 14:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:03.926 14:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.926 14:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key3 00:17:03.926 14:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.926 14:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.926 14:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.926 14:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:03.926 14:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:04.489 00:17:04.489 14:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.489 14:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.489 14:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.746 14:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.746 14:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.746 14:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.746 14:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.746 14:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.746 14:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.746 { 00:17:04.746 "auth": { 00:17:04.746 "dhgroup": "ffdhe4096", 00:17:04.747 "digest": "sha256", 00:17:04.747 "state": "completed" 00:17:04.747 }, 00:17:04.747 "cntlid": 31, 00:17:04.747 "listen_address": { 00:17:04.747 "adrfam": "IPv4", 00:17:04.747 "traddr": "10.0.0.2", 00:17:04.747 "trsvcid": "4420", 00:17:04.747 "trtype": "TCP" 00:17:04.747 }, 00:17:04.747 "peer_address": { 00:17:04.747 "adrfam": "IPv4", 00:17:04.747 "traddr": "10.0.0.1", 00:17:04.747 "trsvcid": "49734", 00:17:04.747 "trtype": "TCP" 00:17:04.747 }, 00:17:04.747 "qid": 0, 00:17:04.747 "state": "enabled", 00:17:04.747 "thread": "nvmf_tgt_poll_group_000" 00:17:04.747 } 00:17:04.747 ]' 00:17:04.747 14:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.747 14:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:04.747 14:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.747 14:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:04.747 14:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:05.004 14:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.004 14:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.004 14:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.262 14:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:03:NzI5NTViMTU5MWY3NmI3NTJjMmUzZDg4MDE5NWY2NzkzYTRiN2Q3MWU4M2UzYWNiOTdkZjE1YTliNzZkZmNkMzNDPj4=: 00:17:05.828 14:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.828 
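The key3 pass that just ran differs from the others: nvme connect is given only --dhchap-secret (a DHHC-1:03 host key) with no --dhchap-ctrl-secret, and the earlier nvmf_subsystem_add_host / bdev_nvme_attach_controller calls carry only --dhchap-key key3. That follows from the conditional expansion visible in the trace (written there with $3, the helper's key-index argument); a rough equivalent with illustrative variable names:

  # ckeys[3] is left empty, so key3 exercises unidirectional authentication:
  # ${var:+...} expands to nothing when the controller key is unset or empty.
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key "key$keyid" "${ckey[@]}"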
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.828 14:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:17:05.828 14:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.828 14:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.828 14:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.828 14:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.828 14:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:05.828 14:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:05.828 14:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:06.086 14:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:17:06.086 14:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.086 14:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:06.086 14:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:06.086 14:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:06.086 14:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.086 14:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.086 14:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.086 14:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.086 14:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.086 14:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.086 14:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.651 00:17:06.651 14:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:06.651 14:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.651 14:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:06.909 14:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.909 14:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:17:06.909 14:35:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.909 14:35:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.909 14:35:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.910 14:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.910 { 00:17:06.910 "auth": { 00:17:06.910 "dhgroup": "ffdhe6144", 00:17:06.910 "digest": "sha256", 00:17:06.910 "state": "completed" 00:17:06.910 }, 00:17:06.910 "cntlid": 33, 00:17:06.910 "listen_address": { 00:17:06.910 "adrfam": "IPv4", 00:17:06.910 "traddr": "10.0.0.2", 00:17:06.910 "trsvcid": "4420", 00:17:06.910 "trtype": "TCP" 00:17:06.910 }, 00:17:06.910 "peer_address": { 00:17:06.910 "adrfam": "IPv4", 00:17:06.910 "traddr": "10.0.0.1", 00:17:06.910 "trsvcid": "49762", 00:17:06.910 "trtype": "TCP" 00:17:06.910 }, 00:17:06.910 "qid": 0, 00:17:06.910 "state": "enabled", 00:17:06.910 "thread": "nvmf_tgt_poll_group_000" 00:17:06.910 } 00:17:06.910 ]' 00:17:06.910 14:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.910 14:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:06.910 14:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.910 14:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:06.910 14:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.910 14:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.910 14:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.910 14:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.476 14:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:00:NWU2NjE5MzlkYjM3OTU1NGRhMTBjMzk4OTE3ZGIwN2ZmYjNhNGNlZmFlMWQ3YWNiMSNgow==: --dhchap-ctrl-secret DHHC-1:03:ZDVlZjk5NTQwZTllYWYzNzQyZjUxMjFhMWFiYjlmMTg1ZmQyMzc2ODU1OTIzMzFjYWIyYmVmNTgxNjVmNmE3Na08ckc=: 00:17:08.043 14:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.043 14:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:17:08.043 14:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.043 14:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.043 14:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.043 14:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.043 14:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:08.043 14:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:08.302 14:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:17:08.302 14:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.302 14:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:08.302 14:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:08.302 14:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:08.302 14:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.302 14:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.302 14:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.302 14:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.302 14:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.302 14:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.302 14:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.868 00:17:08.868 14:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.868 14:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.868 14:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:09.126 14:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.126 14:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.126 14:35:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.126 14:35:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.126 14:35:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.126 14:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.126 { 00:17:09.126 "auth": { 00:17:09.126 "dhgroup": "ffdhe6144", 00:17:09.126 "digest": "sha256", 00:17:09.126 "state": "completed" 00:17:09.126 }, 00:17:09.126 "cntlid": 35, 00:17:09.126 "listen_address": { 00:17:09.126 "adrfam": "IPv4", 00:17:09.126 "traddr": "10.0.0.2", 00:17:09.126 "trsvcid": "4420", 00:17:09.126 "trtype": "TCP" 00:17:09.126 }, 00:17:09.126 "peer_address": { 00:17:09.126 "adrfam": "IPv4", 00:17:09.126 "traddr": "10.0.0.1", 00:17:09.126 "trsvcid": "49800", 00:17:09.126 "trtype": "TCP" 00:17:09.126 }, 00:17:09.126 "qid": 0, 00:17:09.126 "state": "enabled", 00:17:09.126 "thread": "nvmf_tgt_poll_group_000" 00:17:09.126 } 00:17:09.126 ]' 00:17:09.126 14:35:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.126 14:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:09.126 14:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.384 14:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:09.384 14:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.384 14:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.384 14:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.384 14:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.642 14:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:01:ZjRkZWNjNmJkNjY2YWQ3NGY2YWMzM2M1YTE5ZDI1ZWJFLMcm: --dhchap-ctrl-secret DHHC-1:02:ZDliNmI5Y2YxZjZkYjMyYmNmMTZkYWYzZWI4MDdlY2M2NmI1ZGVmYTYzYWYzMmE1jv0a6g==: 00:17:10.576 14:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.576 14:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:17:10.576 14:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.576 14:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.576 14:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.576 14:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:10.576 14:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:10.576 14:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:10.576 14:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:17:10.576 14:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.576 14:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:10.576 14:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:10.576 14:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:10.576 14:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.576 14:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.576 14:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.576 14:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.576 
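Each pass is validated by reading the controller list on the host side and the subsystem's qpair list on the target side, then asserting the negotiated auth parameters; that is what the repeated jq checks above do. A condensed sketch, with the digest/dhgroup literals following whichever combination is under test:

  [[ $(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers \
        | jq -r '.[].name') == nvme0 ]]

  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # detach the SPDK-host controller before moving on
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0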
14:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.576 14:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.577 14:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.143 00:17:11.143 14:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.143 14:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.144 14:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.401 14:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.401 14:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.401 14:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.401 14:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.401 14:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.401 14:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.401 { 00:17:11.401 "auth": { 00:17:11.401 "dhgroup": "ffdhe6144", 00:17:11.401 "digest": "sha256", 00:17:11.401 "state": "completed" 00:17:11.401 }, 00:17:11.401 "cntlid": 37, 00:17:11.401 "listen_address": { 00:17:11.401 "adrfam": "IPv4", 00:17:11.401 "traddr": "10.0.0.2", 00:17:11.401 "trsvcid": "4420", 00:17:11.401 "trtype": "TCP" 00:17:11.401 }, 00:17:11.401 "peer_address": { 00:17:11.401 "adrfam": "IPv4", 00:17:11.401 "traddr": "10.0.0.1", 00:17:11.401 "trsvcid": "49830", 00:17:11.401 "trtype": "TCP" 00:17:11.401 }, 00:17:11.401 "qid": 0, 00:17:11.401 "state": "enabled", 00:17:11.401 "thread": "nvmf_tgt_poll_group_000" 00:17:11.401 } 00:17:11.401 ]' 00:17:11.401 14:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.659 14:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:11.659 14:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.659 14:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:11.659 14:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.659 14:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.659 14:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.659 14:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.918 14:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 
29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:02:YWU5MmY4Yjg0ZTRlMmEwZjg1OTE4NjFlOGU3MmZlMGM2NWVmM2Y5MTNiOTcyOTdjYzJ8bw==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MWUzMGYyZGNkNzdlZTdhZTVmMjQ4MTcxNmI1NWEga3S7: 00:17:12.855 14:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.855 14:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:17:12.855 14:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.855 14:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.855 14:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.855 14:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.855 14:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:12.855 14:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:13.114 14:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:17:13.114 14:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.114 14:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:13.114 14:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:13.114 14:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:13.114 14:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.114 14:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key3 00:17:13.114 14:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.114 14:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.114 14:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.114 14:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:13.114 14:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:13.682 00:17:13.682 14:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.682 14:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.682 14:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.941 14:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 
-- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.941 14:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.941 14:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.941 14:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.941 14:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.941 14:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.941 { 00:17:13.941 "auth": { 00:17:13.941 "dhgroup": "ffdhe6144", 00:17:13.941 "digest": "sha256", 00:17:13.941 "state": "completed" 00:17:13.941 }, 00:17:13.941 "cntlid": 39, 00:17:13.941 "listen_address": { 00:17:13.941 "adrfam": "IPv4", 00:17:13.941 "traddr": "10.0.0.2", 00:17:13.941 "trsvcid": "4420", 00:17:13.941 "trtype": "TCP" 00:17:13.941 }, 00:17:13.941 "peer_address": { 00:17:13.941 "adrfam": "IPv4", 00:17:13.941 "traddr": "10.0.0.1", 00:17:13.941 "trsvcid": "42222", 00:17:13.941 "trtype": "TCP" 00:17:13.941 }, 00:17:13.941 "qid": 0, 00:17:13.941 "state": "enabled", 00:17:13.941 "thread": "nvmf_tgt_poll_group_000" 00:17:13.941 } 00:17:13.941 ]' 00:17:13.941 14:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.941 14:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:13.941 14:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.941 14:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:13.941 14:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.941 14:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.941 14:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.941 14:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.199 14:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:03:NzI5NTViMTU5MWY3NmI3NTJjMmUzZDg4MDE5NWY2NzkzYTRiN2Q3MWU4M2UzYWNiOTdkZjE1YTliNzZkZmNkMzNDPj4=: 00:17:15.158 14:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.158 14:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:17:15.158 14:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.158 14:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.158 14:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.158 14:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.158 14:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.158 14:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe8192 00:17:15.158 14:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:15.416 14:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:17:15.416 14:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.416 14:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:15.416 14:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:15.416 14:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:15.416 14:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.416 14:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.416 14:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.416 14:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.416 14:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.416 14:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.416 14:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.983 00:17:15.983 14:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:15.983 14:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:15.983 14:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.242 14:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.242 14:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.242 14:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.242 14:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.242 14:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.242 14:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.242 { 00:17:16.242 "auth": { 00:17:16.242 "dhgroup": "ffdhe8192", 00:17:16.242 "digest": "sha256", 00:17:16.242 "state": "completed" 00:17:16.242 }, 00:17:16.242 "cntlid": 41, 00:17:16.242 "listen_address": { 00:17:16.242 "adrfam": "IPv4", 00:17:16.242 "traddr": "10.0.0.2", 00:17:16.242 "trsvcid": "4420", 00:17:16.242 "trtype": "TCP" 00:17:16.242 }, 00:17:16.242 "peer_address": { 00:17:16.242 "adrfam": "IPv4", 00:17:16.242 "traddr": "10.0.0.1", 00:17:16.242 "trsvcid": "42254", 00:17:16.242 "trtype": "TCP" 00:17:16.242 }, 
00:17:16.242 "qid": 0, 00:17:16.242 "state": "enabled", 00:17:16.242 "thread": "nvmf_tgt_poll_group_000" 00:17:16.242 } 00:17:16.242 ]' 00:17:16.242 14:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.242 14:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:16.242 14:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.501 14:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:16.501 14:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.501 14:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.501 14:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.501 14:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.760 14:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:00:NWU2NjE5MzlkYjM3OTU1NGRhMTBjMzk4OTE3ZGIwN2ZmYjNhNGNlZmFlMWQ3YWNiMSNgow==: --dhchap-ctrl-secret DHHC-1:03:ZDVlZjk5NTQwZTllYWYzNzQyZjUxMjFhMWFiYjlmMTg1ZmQyMzc2ODU1OTIzMzFjYWIyYmVmNTgxNjVmNmE3Na08ckc=: 00:17:17.694 14:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.695 14:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:17:17.695 14:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.695 14:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.695 14:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.695 14:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.695 14:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:17.695 14:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:17.953 14:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:17.953 14:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.953 14:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:17.953 14:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:17.953 14:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:17.953 14:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.953 14:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:17:17.953 14:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.953 14:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.953 14:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.953 14:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.953 14:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.520 00:17:18.520 14:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.520 14:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.520 14:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.779 14:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.779 14:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.779 14:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.779 14:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.779 14:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.779 14:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.779 { 00:17:18.779 "auth": { 00:17:18.779 "dhgroup": "ffdhe8192", 00:17:18.779 "digest": "sha256", 00:17:18.779 "state": "completed" 00:17:18.779 }, 00:17:18.779 "cntlid": 43, 00:17:18.779 "listen_address": { 00:17:18.779 "adrfam": "IPv4", 00:17:18.779 "traddr": "10.0.0.2", 00:17:18.779 "trsvcid": "4420", 00:17:18.779 "trtype": "TCP" 00:17:18.779 }, 00:17:18.779 "peer_address": { 00:17:18.779 "adrfam": "IPv4", 00:17:18.779 "traddr": "10.0.0.1", 00:17:18.779 "trsvcid": "42272", 00:17:18.779 "trtype": "TCP" 00:17:18.779 }, 00:17:18.779 "qid": 0, 00:17:18.779 "state": "enabled", 00:17:18.779 "thread": "nvmf_tgt_poll_group_000" 00:17:18.779 } 00:17:18.779 ]' 00:17:18.779 14:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.037 14:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:19.037 14:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.037 14:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:19.037 14:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.037 14:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.037 14:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.037 14:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.308 14:35:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:01:ZjRkZWNjNmJkNjY2YWQ3NGY2YWMzM2M1YTE5ZDI1ZWJFLMcm: --dhchap-ctrl-secret DHHC-1:02:ZDliNmI5Y2YxZjZkYjMyYmNmMTZkYWYzZWI4MDdlY2M2NmI1ZGVmYTYzYWYzMmE1jv0a6g==: 00:17:19.926 14:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.926 14:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:17:19.926 14:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.926 14:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.926 14:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.926 14:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.926 14:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:19.926 14:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:20.182 14:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:20.182 14:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.182 14:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:20.182 14:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:20.182 14:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:20.182 14:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.182 14:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.182 14:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.182 14:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.182 14:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.182 14:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.182 14:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.114 00:17:21.114 14:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.114 14:35:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.114 14:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.371 14:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.371 14:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.372 14:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.372 14:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.372 14:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.372 14:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:21.372 { 00:17:21.372 "auth": { 00:17:21.372 "dhgroup": "ffdhe8192", 00:17:21.372 "digest": "sha256", 00:17:21.372 "state": "completed" 00:17:21.372 }, 00:17:21.372 "cntlid": 45, 00:17:21.372 "listen_address": { 00:17:21.372 "adrfam": "IPv4", 00:17:21.372 "traddr": "10.0.0.2", 00:17:21.372 "trsvcid": "4420", 00:17:21.372 "trtype": "TCP" 00:17:21.372 }, 00:17:21.372 "peer_address": { 00:17:21.372 "adrfam": "IPv4", 00:17:21.372 "traddr": "10.0.0.1", 00:17:21.372 "trsvcid": "42292", 00:17:21.372 "trtype": "TCP" 00:17:21.372 }, 00:17:21.372 "qid": 0, 00:17:21.372 "state": "enabled", 00:17:21.372 "thread": "nvmf_tgt_poll_group_000" 00:17:21.372 } 00:17:21.372 ]' 00:17:21.372 14:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:21.372 14:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:21.372 14:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:21.372 14:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:21.372 14:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.372 14:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.372 14:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.372 14:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.936 14:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:02:YWU5MmY4Yjg0ZTRlMmEwZjg1OTE4NjFlOGU3MmZlMGM2NWVmM2Y5MTNiOTcyOTdjYzJ8bw==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MWUzMGYyZGNkNzdlZTdhZTVmMjQ4MTcxNmI1NWEga3S7: 00:17:22.498 14:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.756 14:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:17:22.756 14:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.756 14:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.756 14:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.756 14:35:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:22.756 14:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:22.756 14:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:23.013 14:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:23.013 14:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.013 14:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:23.013 14:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:23.013 14:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:23.014 14:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.014 14:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key3 00:17:23.014 14:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.014 14:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.014 14:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.014 14:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.014 14:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.581 00:17:23.581 14:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.581 14:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.581 14:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.865 14:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.865 14:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.865 14:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.865 14:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.865 14:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.865 14:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.865 { 00:17:23.865 "auth": { 00:17:23.865 "dhgroup": "ffdhe8192", 00:17:23.865 "digest": "sha256", 00:17:23.865 "state": "completed" 00:17:23.865 }, 00:17:23.866 "cntlid": 47, 00:17:23.866 "listen_address": { 00:17:23.866 "adrfam": "IPv4", 00:17:23.866 "traddr": "10.0.0.2", 00:17:23.866 "trsvcid": "4420", 00:17:23.866 "trtype": "TCP" 00:17:23.866 }, 00:17:23.866 
"peer_address": { 00:17:23.866 "adrfam": "IPv4", 00:17:23.866 "traddr": "10.0.0.1", 00:17:23.866 "trsvcid": "41824", 00:17:23.866 "trtype": "TCP" 00:17:23.866 }, 00:17:23.866 "qid": 0, 00:17:23.866 "state": "enabled", 00:17:23.866 "thread": "nvmf_tgt_poll_group_000" 00:17:23.866 } 00:17:23.866 ]' 00:17:23.866 14:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.866 14:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:23.866 14:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.129 14:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:24.129 14:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.129 14:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.129 14:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.129 14:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.388 14:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:03:NzI5NTViMTU5MWY3NmI3NTJjMmUzZDg4MDE5NWY2NzkzYTRiN2Q3MWU4M2UzYWNiOTdkZjE1YTliNzZkZmNkMzNDPj4=: 00:17:24.955 14:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.955 14:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:17:24.955 14:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.955 14:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.955 14:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.955 14:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:24.955 14:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.955 14:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.955 14:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:24.955 14:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:25.213 14:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:25.213 14:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.213 14:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:25.213 14:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:25.213 14:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:25.213 14:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:17:25.213 14:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.213 14:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.213 14:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.213 14:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.213 14:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.213 14:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.780 00:17:25.781 14:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.781 14:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.781 14:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.039 14:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.039 14:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.039 14:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.039 14:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.039 14:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.039 14:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:26.039 { 00:17:26.039 "auth": { 00:17:26.039 "dhgroup": "null", 00:17:26.039 "digest": "sha384", 00:17:26.039 "state": "completed" 00:17:26.039 }, 00:17:26.039 "cntlid": 49, 00:17:26.039 "listen_address": { 00:17:26.039 "adrfam": "IPv4", 00:17:26.039 "traddr": "10.0.0.2", 00:17:26.039 "trsvcid": "4420", 00:17:26.039 "trtype": "TCP" 00:17:26.039 }, 00:17:26.039 "peer_address": { 00:17:26.039 "adrfam": "IPv4", 00:17:26.039 "traddr": "10.0.0.1", 00:17:26.039 "trsvcid": "41848", 00:17:26.039 "trtype": "TCP" 00:17:26.039 }, 00:17:26.039 "qid": 0, 00:17:26.039 "state": "enabled", 00:17:26.039 "thread": "nvmf_tgt_poll_group_000" 00:17:26.039 } 00:17:26.039 ]' 00:17:26.039 14:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:26.039 14:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.039 14:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:26.039 14:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:26.039 14:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:26.039 14:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.039 14:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:26.039 14:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.298 14:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:00:NWU2NjE5MzlkYjM3OTU1NGRhMTBjMzk4OTE3ZGIwN2ZmYjNhNGNlZmFlMWQ3YWNiMSNgow==: --dhchap-ctrl-secret DHHC-1:03:ZDVlZjk5NTQwZTllYWYzNzQyZjUxMjFhMWFiYjlmMTg1ZmQyMzc2ODU1OTIzMzFjYWIyYmVmNTgxNjVmNmE3Na08ckc=: 00:17:27.232 14:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.232 14:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:17:27.232 14:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.232 14:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.232 14:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.232 14:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.232 14:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:27.232 14:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:27.492 14:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:17:27.492 14:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.492 14:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:27.492 14:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:27.492 14:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:27.492 14:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.492 14:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.492 14:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.492 14:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.492 14:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.492 14:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.492 14:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.750 00:17:27.750 14:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:27.750 14:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.750 14:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.317 14:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.317 14:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.317 14:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.317 14:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.317 14:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.317 14:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.317 { 00:17:28.317 "auth": { 00:17:28.317 "dhgroup": "null", 00:17:28.317 "digest": "sha384", 00:17:28.317 "state": "completed" 00:17:28.317 }, 00:17:28.317 "cntlid": 51, 00:17:28.317 "listen_address": { 00:17:28.317 "adrfam": "IPv4", 00:17:28.317 "traddr": "10.0.0.2", 00:17:28.317 "trsvcid": "4420", 00:17:28.317 "trtype": "TCP" 00:17:28.317 }, 00:17:28.317 "peer_address": { 00:17:28.317 "adrfam": "IPv4", 00:17:28.317 "traddr": "10.0.0.1", 00:17:28.317 "trsvcid": "41874", 00:17:28.317 "trtype": "TCP" 00:17:28.317 }, 00:17:28.317 "qid": 0, 00:17:28.317 "state": "enabled", 00:17:28.317 "thread": "nvmf_tgt_poll_group_000" 00:17:28.317 } 00:17:28.317 ]' 00:17:28.317 14:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.317 14:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.317 14:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.317 14:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:28.317 14:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.317 14:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.317 14:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.317 14:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.576 14:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:01:ZjRkZWNjNmJkNjY2YWQ3NGY2YWMzM2M1YTE5ZDI1ZWJFLMcm: --dhchap-ctrl-secret DHHC-1:02:ZDliNmI5Y2YxZjZkYjMyYmNmMTZkYWYzZWI4MDdlY2M2NmI1ZGVmYTYzYWYzMmE1jv0a6g==: 00:17:29.511 14:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.511 14:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:17:29.511 
14:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.511 14:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.511 14:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.511 14:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.511 14:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:29.511 14:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:29.770 14:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:17:29.770 14:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.770 14:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:29.770 14:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:29.770 14:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:29.770 14:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.770 14:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.770 14:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.770 14:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.770 14:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.770 14:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.770 14:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.029 00:17:30.029 14:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.029 14:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.029 14:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.287 14:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.287 14:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.287 14:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.287 14:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.287 14:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.287 14:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.287 { 00:17:30.287 
"auth": { 00:17:30.287 "dhgroup": "null", 00:17:30.287 "digest": "sha384", 00:17:30.287 "state": "completed" 00:17:30.287 }, 00:17:30.287 "cntlid": 53, 00:17:30.287 "listen_address": { 00:17:30.287 "adrfam": "IPv4", 00:17:30.287 "traddr": "10.0.0.2", 00:17:30.287 "trsvcid": "4420", 00:17:30.287 "trtype": "TCP" 00:17:30.287 }, 00:17:30.287 "peer_address": { 00:17:30.287 "adrfam": "IPv4", 00:17:30.287 "traddr": "10.0.0.1", 00:17:30.287 "trsvcid": "41912", 00:17:30.287 "trtype": "TCP" 00:17:30.287 }, 00:17:30.287 "qid": 0, 00:17:30.287 "state": "enabled", 00:17:30.287 "thread": "nvmf_tgt_poll_group_000" 00:17:30.287 } 00:17:30.287 ]' 00:17:30.287 14:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.545 14:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.545 14:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.545 14:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:30.545 14:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.545 14:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.545 14:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.545 14:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.804 14:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:02:YWU5MmY4Yjg0ZTRlMmEwZjg1OTE4NjFlOGU3MmZlMGM2NWVmM2Y5MTNiOTcyOTdjYzJ8bw==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MWUzMGYyZGNkNzdlZTdhZTVmMjQ4MTcxNmI1NWEga3S7: 00:17:31.736 14:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.736 14:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:17:31.736 14:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.736 14:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.736 14:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.736 14:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.736 14:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:31.736 14:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:31.736 14:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:17:31.736 14:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.736 14:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:31.736 14:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:31.736 14:35:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:31.736 14:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.736 14:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key3 00:17:31.736 14:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.736 14:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.736 14:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.736 14:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:31.736 14:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:32.304 00:17:32.304 14:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.304 14:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.304 14:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.562 14:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.562 14:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.562 14:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.562 14:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.562 14:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.562 14:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.562 { 00:17:32.562 "auth": { 00:17:32.562 "dhgroup": "null", 00:17:32.562 "digest": "sha384", 00:17:32.562 "state": "completed" 00:17:32.562 }, 00:17:32.562 "cntlid": 55, 00:17:32.562 "listen_address": { 00:17:32.562 "adrfam": "IPv4", 00:17:32.562 "traddr": "10.0.0.2", 00:17:32.562 "trsvcid": "4420", 00:17:32.562 "trtype": "TCP" 00:17:32.562 }, 00:17:32.562 "peer_address": { 00:17:32.562 "adrfam": "IPv4", 00:17:32.562 "traddr": "10.0.0.1", 00:17:32.562 "trsvcid": "57110", 00:17:32.562 "trtype": "TCP" 00:17:32.562 }, 00:17:32.562 "qid": 0, 00:17:32.562 "state": "enabled", 00:17:32.562 "thread": "nvmf_tgt_poll_group_000" 00:17:32.562 } 00:17:32.562 ]' 00:17:32.562 14:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.562 14:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.562 14:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.562 14:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:32.563 14:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.563 14:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:32.563 14:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.563 14:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.821 14:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:03:NzI5NTViMTU5MWY3NmI3NTJjMmUzZDg4MDE5NWY2NzkzYTRiN2Q3MWU4M2UzYWNiOTdkZjE1YTliNzZkZmNkMzNDPj4=: 00:17:33.761 14:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.761 14:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:17:33.761 14:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.761 14:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.761 14:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.761 14:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:33.761 14:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.761 14:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:33.761 14:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:33.761 14:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:17:33.761 14:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.761 14:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:33.761 14:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:33.761 14:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:33.761 14:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.761 14:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.761 14:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.761 14:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.762 14:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.762 14:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.762 14:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.328 00:17:34.328 14:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.328 14:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.328 14:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.586 14:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.586 14:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.586 14:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.586 14:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.586 14:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.586 14:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.586 { 00:17:34.586 "auth": { 00:17:34.586 "dhgroup": "ffdhe2048", 00:17:34.586 "digest": "sha384", 00:17:34.586 "state": "completed" 00:17:34.586 }, 00:17:34.586 "cntlid": 57, 00:17:34.586 "listen_address": { 00:17:34.586 "adrfam": "IPv4", 00:17:34.586 "traddr": "10.0.0.2", 00:17:34.586 "trsvcid": "4420", 00:17:34.586 "trtype": "TCP" 00:17:34.586 }, 00:17:34.586 "peer_address": { 00:17:34.586 "adrfam": "IPv4", 00:17:34.586 "traddr": "10.0.0.1", 00:17:34.586 "trsvcid": "57152", 00:17:34.586 "trtype": "TCP" 00:17:34.586 }, 00:17:34.586 "qid": 0, 00:17:34.586 "state": "enabled", 00:17:34.586 "thread": "nvmf_tgt_poll_group_000" 00:17:34.586 } 00:17:34.586 ]' 00:17:34.586 14:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.586 14:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.586 14:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.844 14:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:34.844 14:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.844 14:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.844 14:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.844 14:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.102 14:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:00:NWU2NjE5MzlkYjM3OTU1NGRhMTBjMzk4OTE3ZGIwN2ZmYjNhNGNlZmFlMWQ3YWNiMSNgow==: --dhchap-ctrl-secret DHHC-1:03:ZDVlZjk5NTQwZTllYWYzNzQyZjUxMjFhMWFiYjlmMTg1ZmQyMzc2ODU1OTIzMzFjYWIyYmVmNTgxNjVmNmE3Na08ckc=: 00:17:36.035 14:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.035 14:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:17:36.035 14:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.035 14:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.035 14:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.035 14:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:36.035 14:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:36.035 14:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:36.293 14:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:17:36.293 14:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.293 14:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:36.293 14:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:36.293 14:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:36.293 14:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.293 14:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.293 14:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.293 14:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.293 14:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.293 14:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.293 14:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.551 00:17:36.551 14:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.551 14:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.551 14:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.117 14:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.117 14:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.117 14:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.117 14:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.117 14:35:49 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.117 14:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:37.117 { 00:17:37.117 "auth": { 00:17:37.117 "dhgroup": "ffdhe2048", 00:17:37.117 "digest": "sha384", 00:17:37.117 "state": "completed" 00:17:37.117 }, 00:17:37.117 "cntlid": 59, 00:17:37.117 "listen_address": { 00:17:37.117 "adrfam": "IPv4", 00:17:37.117 "traddr": "10.0.0.2", 00:17:37.117 "trsvcid": "4420", 00:17:37.117 "trtype": "TCP" 00:17:37.117 }, 00:17:37.117 "peer_address": { 00:17:37.117 "adrfam": "IPv4", 00:17:37.117 "traddr": "10.0.0.1", 00:17:37.117 "trsvcid": "57172", 00:17:37.117 "trtype": "TCP" 00:17:37.117 }, 00:17:37.117 "qid": 0, 00:17:37.117 "state": "enabled", 00:17:37.117 "thread": "nvmf_tgt_poll_group_000" 00:17:37.117 } 00:17:37.117 ]' 00:17:37.117 14:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.117 14:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.117 14:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:37.117 14:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:37.117 14:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:37.117 14:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.117 14:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.117 14:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.378 14:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:01:ZjRkZWNjNmJkNjY2YWQ3NGY2YWMzM2M1YTE5ZDI1ZWJFLMcm: --dhchap-ctrl-secret DHHC-1:02:ZDliNmI5Y2YxZjZkYjMyYmNmMTZkYWYzZWI4MDdlY2M2NmI1ZGVmYTYzYWYzMmE1jv0a6g==: 00:17:38.364 14:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.364 14:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:17:38.364 14:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.364 14:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.364 14:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.364 14:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.364 14:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:38.364 14:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:38.622 14:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:38.622 14:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
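Besides the bdev_nvme attach path, each combination in the trace is also exercised through the kernel initiator: nvme-cli connects in-band with the corresponding DH-HMAC-CHAP secrets, the "disconnected 1 controller(s)" lines are its teardown, and the host entry is then removed from the subsystem before the next combination. A sketch of that step is below; the base64 key material printed in the trace is replaced by placeholders, rpc_cmd is the autotest wrapper around scripts/rpc.py for the target, and the DHHC-1:<nn>: prefix on each secret appears to encode how the key was transformed (00 for a plain key, 01/02/03 for SHA-256/384/512); that reading of the prefix is an inference, not something stated in the trace.

    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTID=29002397-6866-4d44-9964-2c83ec2680a9
    HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:${HOSTID}"

    # Kernel-initiator connect with in-band DH-HMAC-CHAP authentication.
    # <host-key-b64> / <ctrl-key-b64> stand in for the DHHC-1 secrets shown in the trace;
    # -i 1 is carried over from the trace unchanged.
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 \
        -q "$HOSTNQN" --hostid "$HOSTID" \
        --dhchap-secret 'DHHC-1:02:<host-key-b64>:' \
        --dhchap-ctrl-secret 'DHHC-1:01:<ctrl-key-b64>:'

    # Drop the kernel connection again ("NQN:... disconnected 1 controller(s)" in the trace).
    nvme disconnect -n "$SUBNQN"

    # Remove the host from the subsystem before the next digest/DH-group/key combination.
    rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"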
00:17:38.622 14:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:38.622 14:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:38.622 14:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:38.622 14:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.622 14:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.622 14:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.622 14:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.622 14:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.622 14:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.622 14:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.879 00:17:38.879 14:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:38.879 14:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:38.879 14:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.137 14:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.137 14:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.137 14:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.137 14:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.137 14:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.137 14:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.137 { 00:17:39.137 "auth": { 00:17:39.137 "dhgroup": "ffdhe2048", 00:17:39.137 "digest": "sha384", 00:17:39.137 "state": "completed" 00:17:39.137 }, 00:17:39.137 "cntlid": 61, 00:17:39.137 "listen_address": { 00:17:39.137 "adrfam": "IPv4", 00:17:39.137 "traddr": "10.0.0.2", 00:17:39.137 "trsvcid": "4420", 00:17:39.137 "trtype": "TCP" 00:17:39.137 }, 00:17:39.137 "peer_address": { 00:17:39.137 "adrfam": "IPv4", 00:17:39.137 "traddr": "10.0.0.1", 00:17:39.137 "trsvcid": "57198", 00:17:39.137 "trtype": "TCP" 00:17:39.137 }, 00:17:39.137 "qid": 0, 00:17:39.137 "state": "enabled", 00:17:39.137 "thread": "nvmf_tgt_poll_group_000" 00:17:39.137 } 00:17:39.137 ]' 00:17:39.137 14:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.137 14:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.137 14:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.137 
14:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:39.137 14:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.395 14:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.395 14:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.395 14:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.653 14:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:02:YWU5MmY4Yjg0ZTRlMmEwZjg1OTE4NjFlOGU3MmZlMGM2NWVmM2Y5MTNiOTcyOTdjYzJ8bw==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MWUzMGYyZGNkNzdlZTdhZTVmMjQ4MTcxNmI1NWEga3S7: 00:17:40.583 14:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.583 14:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:17:40.583 14:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.583 14:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.583 14:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.583 14:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.583 14:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:40.583 14:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:40.583 14:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:40.583 14:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.583 14:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:40.583 14:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:40.583 14:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:40.583 14:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.583 14:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key3 00:17:40.583 14:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.583 14:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.583 14:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.583 14:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:40.583 14:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:41.150 00:17:41.150 14:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.150 14:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.150 14:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.408 14:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.408 14:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.408 14:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.408 14:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.408 14:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.408 14:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.408 { 00:17:41.408 "auth": { 00:17:41.408 "dhgroup": "ffdhe2048", 00:17:41.408 "digest": "sha384", 00:17:41.408 "state": "completed" 00:17:41.408 }, 00:17:41.408 "cntlid": 63, 00:17:41.408 "listen_address": { 00:17:41.408 "adrfam": "IPv4", 00:17:41.408 "traddr": "10.0.0.2", 00:17:41.408 "trsvcid": "4420", 00:17:41.408 "trtype": "TCP" 00:17:41.408 }, 00:17:41.408 "peer_address": { 00:17:41.408 "adrfam": "IPv4", 00:17:41.408 "traddr": "10.0.0.1", 00:17:41.408 "trsvcid": "57222", 00:17:41.408 "trtype": "TCP" 00:17:41.408 }, 00:17:41.408 "qid": 0, 00:17:41.408 "state": "enabled", 00:17:41.408 "thread": "nvmf_tgt_poll_group_000" 00:17:41.408 } 00:17:41.408 ]' 00:17:41.408 14:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.408 14:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.408 14:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.408 14:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:41.408 14:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.408 14:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.408 14:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.408 14:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.666 14:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:03:NzI5NTViMTU5MWY3NmI3NTJjMmUzZDg4MDE5NWY2NzkzYTRiN2Q3MWU4M2UzYWNiOTdkZjE1YTliNzZkZmNkMzNDPj4=: 00:17:42.599 14:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.599 14:35:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:17:42.599 14:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.599 14:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.599 14:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.599 14:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:42.599 14:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.599 14:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:42.599 14:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:42.856 14:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:42.856 14:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.856 14:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:42.856 14:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:42.856 14:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:42.856 14:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.856 14:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.856 14:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.856 14:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.857 14:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.857 14:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.857 14:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.114 00:17:43.114 14:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.114 14:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.114 14:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.400 14:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.400 14:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.400 14:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:43.400 14:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.400 14:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.400 14:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.400 { 00:17:43.400 "auth": { 00:17:43.400 "dhgroup": "ffdhe3072", 00:17:43.400 "digest": "sha384", 00:17:43.400 "state": "completed" 00:17:43.400 }, 00:17:43.400 "cntlid": 65, 00:17:43.400 "listen_address": { 00:17:43.400 "adrfam": "IPv4", 00:17:43.400 "traddr": "10.0.0.2", 00:17:43.400 "trsvcid": "4420", 00:17:43.400 "trtype": "TCP" 00:17:43.400 }, 00:17:43.400 "peer_address": { 00:17:43.400 "adrfam": "IPv4", 00:17:43.400 "traddr": "10.0.0.1", 00:17:43.400 "trsvcid": "36706", 00:17:43.400 "trtype": "TCP" 00:17:43.400 }, 00:17:43.400 "qid": 0, 00:17:43.400 "state": "enabled", 00:17:43.400 "thread": "nvmf_tgt_poll_group_000" 00:17:43.400 } 00:17:43.400 ]' 00:17:43.400 14:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.400 14:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.400 14:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.679 14:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:43.679 14:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.679 14:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.679 14:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.679 14:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.937 14:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:00:NWU2NjE5MzlkYjM3OTU1NGRhMTBjMzk4OTE3ZGIwN2ZmYjNhNGNlZmFlMWQ3YWNiMSNgow==: --dhchap-ctrl-secret DHHC-1:03:ZDVlZjk5NTQwZTllYWYzNzQyZjUxMjFhMWFiYjlmMTg1ZmQyMzc2ODU1OTIzMzFjYWIyYmVmNTgxNjVmNmE3Na08ckc=: 00:17:44.503 14:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.503 14:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:17:44.503 14:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.503 14:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.503 14:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.503 14:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.503 14:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:44.503 14:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:44.762 14:35:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:44.762 14:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.762 14:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:44.762 14:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:44.762 14:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:44.762 14:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.762 14:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.762 14:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.762 14:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.020 14:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.020 14:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.020 14:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.279 00:17:45.279 14:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.279 14:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.279 14:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.537 14:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.537 14:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.537 14:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.537 14:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.537 14:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.537 14:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.537 { 00:17:45.537 "auth": { 00:17:45.537 "dhgroup": "ffdhe3072", 00:17:45.537 "digest": "sha384", 00:17:45.537 "state": "completed" 00:17:45.537 }, 00:17:45.537 "cntlid": 67, 00:17:45.537 "listen_address": { 00:17:45.537 "adrfam": "IPv4", 00:17:45.537 "traddr": "10.0.0.2", 00:17:45.537 "trsvcid": "4420", 00:17:45.537 "trtype": "TCP" 00:17:45.537 }, 00:17:45.537 "peer_address": { 00:17:45.537 "adrfam": "IPv4", 00:17:45.537 "traddr": "10.0.0.1", 00:17:45.537 "trsvcid": "36728", 00:17:45.537 "trtype": "TCP" 00:17:45.537 }, 00:17:45.537 "qid": 0, 00:17:45.537 "state": "enabled", 00:17:45.537 "thread": "nvmf_tgt_poll_group_000" 00:17:45.537 } 00:17:45.537 ]' 00:17:45.537 14:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:45.537 
14:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:45.537 14:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.795 14:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:45.795 14:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:45.795 14:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.795 14:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.795 14:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.053 14:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:01:ZjRkZWNjNmJkNjY2YWQ3NGY2YWMzM2M1YTE5ZDI1ZWJFLMcm: --dhchap-ctrl-secret DHHC-1:02:ZDliNmI5Y2YxZjZkYjMyYmNmMTZkYWYzZWI4MDdlY2M2NmI1ZGVmYTYzYWYzMmE1jv0a6g==: 00:17:46.988 14:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.988 14:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:17:46.988 14:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.988 14:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.988 14:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.988 14:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:46.988 14:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:46.988 14:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:46.988 14:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:46.988 14:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.988 14:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:46.988 14:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:46.988 14:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:46.988 14:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.988 14:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.988 14:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.988 14:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.988 14:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
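From here to the end of this stretch the log is the same sequence replayed for every DH group and key slot; the driving loops are visible in the trace at target/auth.sh@92-96. A hedged reconstruction of that sweep, limited to commands the trace itself shows (the digest is fixed at sha384 in this part of the run, and only the DH groups that appear here are listed):

digest=sha384
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups seen in this excerpt

for dhgroup in "${dhgroups[@]}"; do
    # Expanded form of "hostrpc bdev_nvme_set_options ..." from the trace:
    # allow exactly this digest/DH group on the initiator-side bdev_nvme module.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    for keyid in 0 1 2 3; do
        # Test helper (target/auth.sh@96), shown expanded inline in the trace:
        # add_host, attach, verify the qpair, detach, then nvme connect/disconnect.
        connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
done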
00:17:46.988 14:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.988 14:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.556 00:17:47.556 14:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.556 14:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.556 14:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.556 14:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.556 14:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.556 14:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.556 14:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.814 14:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.814 14:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.814 { 00:17:47.814 "auth": { 00:17:47.814 "dhgroup": "ffdhe3072", 00:17:47.814 "digest": "sha384", 00:17:47.814 "state": "completed" 00:17:47.815 }, 00:17:47.815 "cntlid": 69, 00:17:47.815 "listen_address": { 00:17:47.815 "adrfam": "IPv4", 00:17:47.815 "traddr": "10.0.0.2", 00:17:47.815 "trsvcid": "4420", 00:17:47.815 "trtype": "TCP" 00:17:47.815 }, 00:17:47.815 "peer_address": { 00:17:47.815 "adrfam": "IPv4", 00:17:47.815 "traddr": "10.0.0.1", 00:17:47.815 "trsvcid": "36746", 00:17:47.815 "trtype": "TCP" 00:17:47.815 }, 00:17:47.815 "qid": 0, 00:17:47.815 "state": "enabled", 00:17:47.815 "thread": "nvmf_tgt_poll_group_000" 00:17:47.815 } 00:17:47.815 ]' 00:17:47.815 14:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.815 14:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:47.815 14:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.815 14:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:47.815 14:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.815 14:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.815 14:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.815 14:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.074 14:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret 
DHHC-1:02:YWU5MmY4Yjg0ZTRlMmEwZjg1OTE4NjFlOGU3MmZlMGM2NWVmM2Y5MTNiOTcyOTdjYzJ8bw==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MWUzMGYyZGNkNzdlZTdhZTVmMjQ4MTcxNmI1NWEga3S7: 00:17:49.008 14:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.008 14:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:17:49.008 14:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.008 14:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.008 14:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.008 14:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.008 14:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:49.008 14:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:49.008 14:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:49.008 14:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.008 14:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:49.008 14:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:49.008 14:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:49.008 14:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.008 14:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key3 00:17:49.008 14:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.008 14:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.008 14:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.008 14:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:49.008 14:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:49.573 00:17:49.573 14:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.573 14:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.573 14:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.832 14:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.832 14:36:01 
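The attach above is where DH-HMAC-CHAP actually runs on the SPDK initiator side: bdev_nvme_attach_controller only produces the nvme0 controller if the handshake with the named key succeeds. A stand-alone sketch of that step and its check, using the rpc.py path and host socket from the trace (key0-key3 are key names registered earlier in the test, outside this excerpt, not literal secrets):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # initiator-side RPC client
hostsock=/var/tmp/host.sock
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9

# Attach through bdev_nvme with this slot's key. key3 has no controller key,
# so --dhchap-ctrlr-key is omitted, matching the trace above.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key key3

# The controller must exist and be named nvme0 once authentication completes.
[[ $("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]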
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.832 14:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.832 14:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.832 14:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.832 14:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.832 { 00:17:49.832 "auth": { 00:17:49.832 "dhgroup": "ffdhe3072", 00:17:49.832 "digest": "sha384", 00:17:49.832 "state": "completed" 00:17:49.832 }, 00:17:49.832 "cntlid": 71, 00:17:49.832 "listen_address": { 00:17:49.832 "adrfam": "IPv4", 00:17:49.832 "traddr": "10.0.0.2", 00:17:49.832 "trsvcid": "4420", 00:17:49.832 "trtype": "TCP" 00:17:49.832 }, 00:17:49.832 "peer_address": { 00:17:49.832 "adrfam": "IPv4", 00:17:49.832 "traddr": "10.0.0.1", 00:17:49.832 "trsvcid": "36772", 00:17:49.832 "trtype": "TCP" 00:17:49.832 }, 00:17:49.832 "qid": 0, 00:17:49.832 "state": "enabled", 00:17:49.832 "thread": "nvmf_tgt_poll_group_000" 00:17:49.832 } 00:17:49.832 ]' 00:17:49.832 14:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.832 14:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:49.832 14:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.832 14:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:49.832 14:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.090 14:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.090 14:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.090 14:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.349 14:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:03:NzI5NTViMTU5MWY3NmI3NTJjMmUzZDg4MDE5NWY2NzkzYTRiN2Q3MWU4M2UzYWNiOTdkZjE1YTliNzZkZmNkMzNDPj4=: 00:17:50.917 14:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.917 14:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:17:50.917 14:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.917 14:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.917 14:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.917 14:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.917 14:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.917 14:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:50.917 14:36:03 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:51.183 14:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:51.183 14:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.183 14:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:51.183 14:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:51.183 14:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:51.183 14:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.183 14:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.183 14:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.183 14:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.183 14:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.183 14:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.183 14:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.748 00:17:51.748 14:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.748 14:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.748 14:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.006 14:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.006 14:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.006 14:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.006 14:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.006 14:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.006 14:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.006 { 00:17:52.006 "auth": { 00:17:52.006 "dhgroup": "ffdhe4096", 00:17:52.006 "digest": "sha384", 00:17:52.006 "state": "completed" 00:17:52.006 }, 00:17:52.006 "cntlid": 73, 00:17:52.006 "listen_address": { 00:17:52.006 "adrfam": "IPv4", 00:17:52.006 "traddr": "10.0.0.2", 00:17:52.006 "trsvcid": "4420", 00:17:52.006 "trtype": "TCP" 00:17:52.006 }, 00:17:52.006 "peer_address": { 00:17:52.006 "adrfam": "IPv4", 00:17:52.006 "traddr": "10.0.0.1", 00:17:52.006 "trsvcid": "36806", 00:17:52.006 "trtype": "TCP" 00:17:52.006 }, 00:17:52.006 "qid": 0, 00:17:52.006 "state": "enabled", 
00:17:52.006 "thread": "nvmf_tgt_poll_group_000" 00:17:52.006 } 00:17:52.006 ]' 00:17:52.006 14:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.006 14:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:52.006 14:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.006 14:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:52.006 14:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.006 14:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.006 14:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.006 14:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.264 14:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:00:NWU2NjE5MzlkYjM3OTU1NGRhMTBjMzk4OTE3ZGIwN2ZmYjNhNGNlZmFlMWQ3YWNiMSNgow==: --dhchap-ctrl-secret DHHC-1:03:ZDVlZjk5NTQwZTllYWYzNzQyZjUxMjFhMWFiYjlmMTg1ZmQyMzc2ODU1OTIzMzFjYWIyYmVmNTgxNjVmNmE3Na08ckc=: 00:17:53.200 14:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.200 14:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:17:53.200 14:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.200 14:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.200 14:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.200 14:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.200 14:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:53.200 14:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:53.200 14:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:53.200 14:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.200 14:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:53.200 14:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:53.200 14:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:53.200 14:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.200 14:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.200 14:36:05 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.200 14:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.200 14:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.200 14:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.200 14:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.767 00:17:53.767 14:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.767 14:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.767 14:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.026 14:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.026 14:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.026 14:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.026 14:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.026 14:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.026 14:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.026 { 00:17:54.026 "auth": { 00:17:54.026 "dhgroup": "ffdhe4096", 00:17:54.026 "digest": "sha384", 00:17:54.026 "state": "completed" 00:17:54.026 }, 00:17:54.026 "cntlid": 75, 00:17:54.026 "listen_address": { 00:17:54.026 "adrfam": "IPv4", 00:17:54.026 "traddr": "10.0.0.2", 00:17:54.026 "trsvcid": "4420", 00:17:54.026 "trtype": "TCP" 00:17:54.026 }, 00:17:54.026 "peer_address": { 00:17:54.026 "adrfam": "IPv4", 00:17:54.026 "traddr": "10.0.0.1", 00:17:54.026 "trsvcid": "59492", 00:17:54.026 "trtype": "TCP" 00:17:54.026 }, 00:17:54.026 "qid": 0, 00:17:54.026 "state": "enabled", 00:17:54.026 "thread": "nvmf_tgt_poll_group_000" 00:17:54.026 } 00:17:54.026 ]' 00:17:54.026 14:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.026 14:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:54.026 14:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.026 14:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:54.026 14:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.026 14:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.026 14:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.026 14:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.343 14:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:01:ZjRkZWNjNmJkNjY2YWQ3NGY2YWMzM2M1YTE5ZDI1ZWJFLMcm: --dhchap-ctrl-secret DHHC-1:02:ZDliNmI5Y2YxZjZkYjMyYmNmMTZkYWYzZWI4MDdlY2M2NmI1ZGVmYTYzYWYzMmE1jv0a6g==: 00:17:55.279 14:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.279 14:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:17:55.279 14:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.279 14:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.279 14:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.279 14:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.279 14:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:55.279 14:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:55.279 14:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:55.279 14:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.279 14:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:55.279 14:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:55.279 14:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:55.279 14:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.279 14:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.279 14:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.279 14:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.279 14:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.279 14:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.279 14:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.847 00:17:55.847 14:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.847 14:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.847 14:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.106 14:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.106 14:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.106 14:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.106 14:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.106 14:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.106 14:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.106 { 00:17:56.106 "auth": { 00:17:56.106 "dhgroup": "ffdhe4096", 00:17:56.106 "digest": "sha384", 00:17:56.106 "state": "completed" 00:17:56.106 }, 00:17:56.106 "cntlid": 77, 00:17:56.106 "listen_address": { 00:17:56.106 "adrfam": "IPv4", 00:17:56.106 "traddr": "10.0.0.2", 00:17:56.106 "trsvcid": "4420", 00:17:56.106 "trtype": "TCP" 00:17:56.106 }, 00:17:56.106 "peer_address": { 00:17:56.106 "adrfam": "IPv4", 00:17:56.106 "traddr": "10.0.0.1", 00:17:56.106 "trsvcid": "59522", 00:17:56.106 "trtype": "TCP" 00:17:56.106 }, 00:17:56.106 "qid": 0, 00:17:56.106 "state": "enabled", 00:17:56.106 "thread": "nvmf_tgt_poll_group_000" 00:17:56.106 } 00:17:56.106 ]' 00:17:56.106 14:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.106 14:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:56.106 14:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.106 14:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:56.106 14:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.364 14:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.364 14:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.364 14:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.622 14:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:02:YWU5MmY4Yjg0ZTRlMmEwZjg1OTE4NjFlOGU3MmZlMGM2NWVmM2Y5MTNiOTcyOTdjYzJ8bw==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MWUzMGYyZGNkNzdlZTdhZTVmMjQ4MTcxNmI1NWEga3S7: 00:17:57.189 14:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.189 14:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:17:57.189 14:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.189 14:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.189 14:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.189 14:36:09 
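On the target side each iteration is verified by fetching the subsystem's queue pairs and checking the negotiated auth parameters, as in the qpair JSON above. The trace shows only the jq filters and the string comparisons; the plumbing below (capturing the JSON and feeding it to jq with here-strings) is an assumption about how the helper wires them together:

# Target-side check for the sha384/ffdhe4096 iteration above: the qpair's
# auth block must report the digest and DH group under test and an auth
# state of "completed". rpc_cmd is the test helper that talks to the nvmf
# target's RPC server.
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]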
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.189 14:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:57.189 14:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:57.448 14:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:57.448 14:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.448 14:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:57.448 14:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:57.448 14:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:57.448 14:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.448 14:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key3 00:17:57.448 14:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.448 14:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.448 14:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.448 14:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:57.449 14:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:58.016 00:17:58.016 14:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.016 14:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.016 14:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.274 14:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.274 14:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.274 14:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.274 14:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.274 14:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.274 14:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.274 { 00:17:58.274 "auth": { 00:17:58.274 "dhgroup": "ffdhe4096", 00:17:58.274 "digest": "sha384", 00:17:58.274 "state": "completed" 00:17:58.274 }, 00:17:58.274 "cntlid": 79, 00:17:58.274 "listen_address": { 00:17:58.274 "adrfam": "IPv4", 00:17:58.274 "traddr": "10.0.0.2", 00:17:58.274 "trsvcid": "4420", 00:17:58.274 "trtype": "TCP" 00:17:58.274 }, 00:17:58.274 
"peer_address": { 00:17:58.274 "adrfam": "IPv4", 00:17:58.274 "traddr": "10.0.0.1", 00:17:58.274 "trsvcid": "59546", 00:17:58.274 "trtype": "TCP" 00:17:58.274 }, 00:17:58.274 "qid": 0, 00:17:58.274 "state": "enabled", 00:17:58.274 "thread": "nvmf_tgt_poll_group_000" 00:17:58.274 } 00:17:58.274 ]' 00:17:58.274 14:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.274 14:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:58.274 14:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.274 14:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:58.274 14:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.274 14:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.274 14:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.274 14:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.838 14:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:03:NzI5NTViMTU5MWY3NmI3NTJjMmUzZDg4MDE5NWY2NzkzYTRiN2Q3MWU4M2UzYWNiOTdkZjE1YTliNzZkZmNkMzNDPj4=: 00:17:59.402 14:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.402 14:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:17:59.402 14:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.402 14:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.402 14:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.402 14:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:59.402 14:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.402 14:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:59.402 14:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:59.658 14:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:59.658 14:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.658 14:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:59.658 14:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:59.658 14:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:59.658 14:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.658 14:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.658 14:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.658 14:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.658 14:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.658 14:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.658 14:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.227 00:18:00.227 14:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.227 14:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.227 14:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.484 14:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.484 14:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.484 14:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.484 14:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.484 14:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.484 14:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.484 { 00:18:00.484 "auth": { 00:18:00.484 "dhgroup": "ffdhe6144", 00:18:00.484 "digest": "sha384", 00:18:00.484 "state": "completed" 00:18:00.484 }, 00:18:00.484 "cntlid": 81, 00:18:00.484 "listen_address": { 00:18:00.484 "adrfam": "IPv4", 00:18:00.484 "traddr": "10.0.0.2", 00:18:00.484 "trsvcid": "4420", 00:18:00.484 "trtype": "TCP" 00:18:00.484 }, 00:18:00.484 "peer_address": { 00:18:00.484 "adrfam": "IPv4", 00:18:00.484 "traddr": "10.0.0.1", 00:18:00.484 "trsvcid": "59580", 00:18:00.484 "trtype": "TCP" 00:18:00.484 }, 00:18:00.484 "qid": 0, 00:18:00.484 "state": "enabled", 00:18:00.484 "thread": "nvmf_tgt_poll_group_000" 00:18:00.484 } 00:18:00.484 ]' 00:18:00.484 14:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.484 14:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:00.484 14:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.742 14:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:00.742 14:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.742 14:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.742 14:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.742 14:36:12 
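Each attempt is also bracketed on the target: the host NQN is authorized with the key slot under test before the connect and removed again after the nvme-cli disconnect, so every iteration starts from a clean subsystem. A sketch of that bracket using the RPCs from the trace (key0/ckey0 match the ffdhe6144 iteration above):

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9

# Authorize the host with this key slot on the target.
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# ... bdev_nvme_attach_controller, qpair checks, nvme connect/disconnect ...

# Revoke the host again so the next key slot starts clean.
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"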
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.999 14:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:00:NWU2NjE5MzlkYjM3OTU1NGRhMTBjMzk4OTE3ZGIwN2ZmYjNhNGNlZmFlMWQ3YWNiMSNgow==: --dhchap-ctrl-secret DHHC-1:03:ZDVlZjk5NTQwZTllYWYzNzQyZjUxMjFhMWFiYjlmMTg1ZmQyMzc2ODU1OTIzMzFjYWIyYmVmNTgxNjVmNmE3Na08ckc=: 00:18:01.565 14:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.565 14:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:18:01.565 14:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.565 14:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.565 14:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.565 14:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.565 14:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:01.565 14:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:02.131 14:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:02.131 14:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.131 14:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:02.131 14:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:02.131 14:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:02.131 14:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.131 14:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.131 14:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.131 14:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.131 14:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.131 14:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.131 14:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.389 00:18:02.647 14:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.647 14:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.647 14:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.905 14:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.905 14:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.905 14:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.905 14:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.905 14:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.905 14:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.905 { 00:18:02.905 "auth": { 00:18:02.905 "dhgroup": "ffdhe6144", 00:18:02.905 "digest": "sha384", 00:18:02.905 "state": "completed" 00:18:02.905 }, 00:18:02.905 "cntlid": 83, 00:18:02.905 "listen_address": { 00:18:02.905 "adrfam": "IPv4", 00:18:02.905 "traddr": "10.0.0.2", 00:18:02.905 "trsvcid": "4420", 00:18:02.905 "trtype": "TCP" 00:18:02.905 }, 00:18:02.906 "peer_address": { 00:18:02.906 "adrfam": "IPv4", 00:18:02.906 "traddr": "10.0.0.1", 00:18:02.906 "trsvcid": "56766", 00:18:02.906 "trtype": "TCP" 00:18:02.906 }, 00:18:02.906 "qid": 0, 00:18:02.906 "state": "enabled", 00:18:02.906 "thread": "nvmf_tgt_poll_group_000" 00:18:02.906 } 00:18:02.906 ]' 00:18:02.906 14:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.906 14:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:02.906 14:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.906 14:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:02.906 14:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.906 14:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.906 14:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.906 14:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.164 14:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:01:ZjRkZWNjNmJkNjY2YWQ3NGY2YWMzM2M1YTE5ZDI1ZWJFLMcm: --dhchap-ctrl-secret DHHC-1:02:ZDliNmI5Y2YxZjZkYjMyYmNmMTZkYWYzZWI4MDdlY2M2NmI1ZGVmYTYzYWYzMmE1jv0a6g==: 00:18:04.099 14:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.099 14:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:18:04.099 14:36:16 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.099 14:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.099 14:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.099 14:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.099 14:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:04.099 14:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:04.357 14:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:04.357 14:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.357 14:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:04.357 14:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:04.357 14:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:04.357 14:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.357 14:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.357 14:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.357 14:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.357 14:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.357 14:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.357 14:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.615 00:18:04.615 14:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.615 14:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.615 14:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.183 14:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.183 14:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.183 14:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.183 14:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.183 14:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.183 14:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.183 { 00:18:05.183 "auth": { 
00:18:05.183 "dhgroup": "ffdhe6144", 00:18:05.183 "digest": "sha384", 00:18:05.183 "state": "completed" 00:18:05.183 }, 00:18:05.183 "cntlid": 85, 00:18:05.183 "listen_address": { 00:18:05.183 "adrfam": "IPv4", 00:18:05.183 "traddr": "10.0.0.2", 00:18:05.183 "trsvcid": "4420", 00:18:05.183 "trtype": "TCP" 00:18:05.183 }, 00:18:05.183 "peer_address": { 00:18:05.183 "adrfam": "IPv4", 00:18:05.183 "traddr": "10.0.0.1", 00:18:05.183 "trsvcid": "56788", 00:18:05.183 "trtype": "TCP" 00:18:05.183 }, 00:18:05.183 "qid": 0, 00:18:05.183 "state": "enabled", 00:18:05.183 "thread": "nvmf_tgt_poll_group_000" 00:18:05.183 } 00:18:05.183 ]' 00:18:05.183 14:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.183 14:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:05.183 14:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.183 14:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:05.183 14:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.183 14:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.183 14:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.183 14:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.441 14:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:02:YWU5MmY4Yjg0ZTRlMmEwZjg1OTE4NjFlOGU3MmZlMGM2NWVmM2Y5MTNiOTcyOTdjYzJ8bw==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MWUzMGYyZGNkNzdlZTdhZTVmMjQ4MTcxNmI1NWEga3S7: 00:18:06.376 14:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.376 14:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:18:06.376 14:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.376 14:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.376 14:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.376 14:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.376 14:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:06.376 14:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:06.634 14:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:06.634 14:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.634 14:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:06.634 14:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 
00:18:06.634 14:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:06.634 14:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.634 14:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key3 00:18:06.634 14:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.634 14:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.634 14:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.634 14:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.634 14:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.892 00:18:07.150 14:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.150 14:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.150 14:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.409 14:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.409 14:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.409 14:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.409 14:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.409 14:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.409 14:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.409 { 00:18:07.409 "auth": { 00:18:07.409 "dhgroup": "ffdhe6144", 00:18:07.409 "digest": "sha384", 00:18:07.409 "state": "completed" 00:18:07.409 }, 00:18:07.409 "cntlid": 87, 00:18:07.409 "listen_address": { 00:18:07.409 "adrfam": "IPv4", 00:18:07.409 "traddr": "10.0.0.2", 00:18:07.409 "trsvcid": "4420", 00:18:07.409 "trtype": "TCP" 00:18:07.409 }, 00:18:07.409 "peer_address": { 00:18:07.409 "adrfam": "IPv4", 00:18:07.409 "traddr": "10.0.0.1", 00:18:07.409 "trsvcid": "56820", 00:18:07.409 "trtype": "TCP" 00:18:07.409 }, 00:18:07.409 "qid": 0, 00:18:07.409 "state": "enabled", 00:18:07.409 "thread": "nvmf_tgt_poll_group_000" 00:18:07.409 } 00:18:07.409 ]' 00:18:07.409 14:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.409 14:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:07.409 14:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.409 14:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:07.409 14:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.409 14:36:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.409 14:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.409 14:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.667 14:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:03:NzI5NTViMTU5MWY3NmI3NTJjMmUzZDg4MDE5NWY2NzkzYTRiN2Q3MWU4M2UzYWNiOTdkZjE1YTliNzZkZmNkMzNDPj4=: 00:18:08.600 14:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.600 14:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:18:08.600 14:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.600 14:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.600 14:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.600 14:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:08.600 14:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.600 14:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:08.600 14:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:08.859 14:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:18:08.859 14:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.859 14:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:08.859 14:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:08.859 14:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:08.859 14:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.859 14:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.859 14:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.859 14:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.859 14:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.859 14:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.859 14:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.426 00:18:09.426 14:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.426 14:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.426 14:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.685 14:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.685 14:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.685 14:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.685 14:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.685 14:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.685 14:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.685 { 00:18:09.685 "auth": { 00:18:09.685 "dhgroup": "ffdhe8192", 00:18:09.685 "digest": "sha384", 00:18:09.685 "state": "completed" 00:18:09.685 }, 00:18:09.685 "cntlid": 89, 00:18:09.685 "listen_address": { 00:18:09.685 "adrfam": "IPv4", 00:18:09.685 "traddr": "10.0.0.2", 00:18:09.685 "trsvcid": "4420", 00:18:09.685 "trtype": "TCP" 00:18:09.685 }, 00:18:09.685 "peer_address": { 00:18:09.685 "adrfam": "IPv4", 00:18:09.685 "traddr": "10.0.0.1", 00:18:09.685 "trsvcid": "56860", 00:18:09.685 "trtype": "TCP" 00:18:09.685 }, 00:18:09.685 "qid": 0, 00:18:09.685 "state": "enabled", 00:18:09.685 "thread": "nvmf_tgt_poll_group_000" 00:18:09.685 } 00:18:09.685 ]' 00:18:09.685 14:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.943 14:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:09.943 14:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.943 14:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:09.943 14:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.943 14:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.943 14:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.943 14:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.202 14:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:00:NWU2NjE5MzlkYjM3OTU1NGRhMTBjMzk4OTE3ZGIwN2ZmYjNhNGNlZmFlMWQ3YWNiMSNgow==: --dhchap-ctrl-secret DHHC-1:03:ZDVlZjk5NTQwZTllYWYzNzQyZjUxMjFhMWFiYjlmMTg1ZmQyMzc2ODU1OTIzMzFjYWIyYmVmNTgxNjVmNmE3Na08ckc=: 00:18:11.282 14:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.282 
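The nvme connect / nvme disconnect pair above exercises the same authentication from the kernel initiator. A condensed sketch of that host step, with the DHHC-1 secrets replaced by placeholders (the real values are generated per test run) and the NQNs and host ID taken from this log:

    # Kernel initiator: authenticate to the SPDK target with host and controller secrets.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 \
        --hostid 29002397-6866-4d44-9964-2c83ec2680a9 \
        --dhchap-secret "DHHC-1:00:<host key placeholder>" \
        --dhchap-ctrl-secret "DHHC-1:03:<controller key placeholder>"

    # Tear the association down again before the next key/dhgroup combination.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
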
14:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:18:11.282 14:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.282 14:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.282 14:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.282 14:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.282 14:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:11.282 14:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:11.282 14:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:11.282 14:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.282 14:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:11.282 14:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:11.282 14:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:11.282 14:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.282 14:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.282 14:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.282 14:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.540 14:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.540 14:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.541 14:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.107 00:18:12.107 14:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.107 14:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.107 14:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.365 14:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.365 14:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.365 14:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.365 14:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:12.365 14:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.365 14:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.365 { 00:18:12.365 "auth": { 00:18:12.365 "dhgroup": "ffdhe8192", 00:18:12.365 "digest": "sha384", 00:18:12.365 "state": "completed" 00:18:12.365 }, 00:18:12.365 "cntlid": 91, 00:18:12.365 "listen_address": { 00:18:12.365 "adrfam": "IPv4", 00:18:12.365 "traddr": "10.0.0.2", 00:18:12.365 "trsvcid": "4420", 00:18:12.365 "trtype": "TCP" 00:18:12.365 }, 00:18:12.365 "peer_address": { 00:18:12.365 "adrfam": "IPv4", 00:18:12.365 "traddr": "10.0.0.1", 00:18:12.365 "trsvcid": "34504", 00:18:12.365 "trtype": "TCP" 00:18:12.365 }, 00:18:12.365 "qid": 0, 00:18:12.365 "state": "enabled", 00:18:12.365 "thread": "nvmf_tgt_poll_group_000" 00:18:12.365 } 00:18:12.365 ]' 00:18:12.365 14:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.624 14:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:12.624 14:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.624 14:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:12.624 14:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.624 14:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.624 14:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.624 14:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.882 14:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:01:ZjRkZWNjNmJkNjY2YWQ3NGY2YWMzM2M1YTE5ZDI1ZWJFLMcm: --dhchap-ctrl-secret DHHC-1:02:ZDliNmI5Y2YxZjZkYjMyYmNmMTZkYWYzZWI4MDdlY2M2NmI1ZGVmYTYzYWYzMmE1jv0a6g==: 00:18:13.818 14:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.818 14:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:18:13.818 14:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.818 14:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.818 14:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.818 14:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.818 14:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:13.818 14:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:14.076 14:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:18:14.076 14:36:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.076 14:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:14.076 14:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:14.076 14:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:14.076 14:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.076 14:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.076 14:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.076 14:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.076 14:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.076 14:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.076 14:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.639 00:18:14.639 14:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.639 14:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.639 14:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.896 14:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.896 14:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.896 14:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.896 14:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.896 14:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.896 14:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.896 { 00:18:14.896 "auth": { 00:18:14.896 "dhgroup": "ffdhe8192", 00:18:14.896 "digest": "sha384", 00:18:14.896 "state": "completed" 00:18:14.896 }, 00:18:14.896 "cntlid": 93, 00:18:14.896 "listen_address": { 00:18:14.896 "adrfam": "IPv4", 00:18:14.896 "traddr": "10.0.0.2", 00:18:14.896 "trsvcid": "4420", 00:18:14.896 "trtype": "TCP" 00:18:14.896 }, 00:18:14.896 "peer_address": { 00:18:14.896 "adrfam": "IPv4", 00:18:14.896 "traddr": "10.0.0.1", 00:18:14.896 "trsvcid": "34530", 00:18:14.896 "trtype": "TCP" 00:18:14.896 }, 00:18:14.896 "qid": 0, 00:18:14.896 "state": "enabled", 00:18:14.896 "thread": "nvmf_tgt_poll_group_000" 00:18:14.896 } 00:18:14.896 ]' 00:18:14.896 14:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.154 14:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:15.154 14:36:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.154 14:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:15.154 14:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.154 14:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.154 14:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.154 14:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.412 14:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:02:YWU5MmY4Yjg0ZTRlMmEwZjg1OTE4NjFlOGU3MmZlMGM2NWVmM2Y5MTNiOTcyOTdjYzJ8bw==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MWUzMGYyZGNkNzdlZTdhZTVmMjQ4MTcxNmI1NWEga3S7: 00:18:16.346 14:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.346 14:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:18:16.346 14:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.346 14:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.346 14:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.346 14:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.346 14:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:16.346 14:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:16.604 14:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:18:16.604 14:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.604 14:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:16.604 14:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:16.604 14:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:16.604 14:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.604 14:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key3 00:18:16.604 14:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.604 14:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.604 14:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.604 14:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:16.604 14:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:17.538 00:18:17.538 14:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.538 14:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.538 14:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.796 14:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.796 14:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.796 14:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.796 14:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.796 14:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.796 14:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.796 { 00:18:17.796 "auth": { 00:18:17.796 "dhgroup": "ffdhe8192", 00:18:17.796 "digest": "sha384", 00:18:17.796 "state": "completed" 00:18:17.796 }, 00:18:17.796 "cntlid": 95, 00:18:17.796 "listen_address": { 00:18:17.796 "adrfam": "IPv4", 00:18:17.796 "traddr": "10.0.0.2", 00:18:17.796 "trsvcid": "4420", 00:18:17.796 "trtype": "TCP" 00:18:17.796 }, 00:18:17.796 "peer_address": { 00:18:17.796 "adrfam": "IPv4", 00:18:17.796 "traddr": "10.0.0.1", 00:18:17.796 "trsvcid": "34542", 00:18:17.796 "trtype": "TCP" 00:18:17.796 }, 00:18:17.796 "qid": 0, 00:18:17.796 "state": "enabled", 00:18:17.796 "thread": "nvmf_tgt_poll_group_000" 00:18:17.796 } 00:18:17.796 ]' 00:18:17.796 14:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.796 14:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:17.796 14:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.796 14:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:17.796 14:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.796 14:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.796 14:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.796 14:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.054 14:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:03:NzI5NTViMTU5MWY3NmI3NTJjMmUzZDg4MDE5NWY2NzkzYTRiN2Q3MWU4M2UzYWNiOTdkZjE1YTliNzZkZmNkMzNDPj4=: 00:18:18.988 14:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.988 
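Each round also drives SPDK's own initiator over the host RPC socket (/var/tmp/host.sock), as the hostrpc entries above show. A trimmed sketch of that bdev_nvme sequence, using only flags that appear in this log; key3 names a key registered earlier in the test run (that setup is not part of this excerpt):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"

    # Limit the host to the digest/dhgroup combination exercised in this round.
    $RPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

    # Attach with DH-HMAC-CHAP, then confirm the controller came up.
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
    $RPC bdev_nvme_get_controllers | jq -r '.[].name'   # expected: nvme0

    # Detach before the next iteration.
    $RPC bdev_nvme_detach_controller nvme0
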
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.988 14:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:18:18.988 14:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.988 14:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.988 14:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.988 14:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:18.988 14:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:18.988 14:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.988 14:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:18.988 14:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:19.246 14:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:18:19.246 14:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.246 14:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:19.246 14:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:19.246 14:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:19.246 14:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.246 14:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.246 14:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.246 14:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.246 14:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.246 14:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.246 14:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.504 00:18:19.504 14:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.504 14:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.504 14:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.762 14:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.762 14:36:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.762 14:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.762 14:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.762 14:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.762 14:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.762 { 00:18:19.762 "auth": { 00:18:19.762 "dhgroup": "null", 00:18:19.762 "digest": "sha512", 00:18:19.762 "state": "completed" 00:18:19.762 }, 00:18:19.762 "cntlid": 97, 00:18:19.762 "listen_address": { 00:18:19.762 "adrfam": "IPv4", 00:18:19.762 "traddr": "10.0.0.2", 00:18:19.762 "trsvcid": "4420", 00:18:19.762 "trtype": "TCP" 00:18:19.762 }, 00:18:19.762 "peer_address": { 00:18:19.762 "adrfam": "IPv4", 00:18:19.762 "traddr": "10.0.0.1", 00:18:19.762 "trsvcid": "34558", 00:18:19.762 "trtype": "TCP" 00:18:19.762 }, 00:18:19.762 "qid": 0, 00:18:19.762 "state": "enabled", 00:18:19.762 "thread": "nvmf_tgt_poll_group_000" 00:18:19.762 } 00:18:19.762 ]' 00:18:19.762 14:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.762 14:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.762 14:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.020 14:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:20.020 14:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.020 14:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.020 14:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.020 14:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.278 14:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:00:NWU2NjE5MzlkYjM3OTU1NGRhMTBjMzk4OTE3ZGIwN2ZmYjNhNGNlZmFlMWQ3YWNiMSNgow==: --dhchap-ctrl-secret DHHC-1:03:ZDVlZjk5NTQwZTllYWYzNzQyZjUxMjFhMWFiYjlmMTg1ZmQyMzc2ODU1OTIzMzFjYWIyYmVmNTgxNjVmNmE3Na08ckc=: 00:18:21.214 14:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.214 14:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:18:21.214 14:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.214 14:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.214 14:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.214 14:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.214 14:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:21.214 14:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:21.214 14:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:18:21.214 14:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.214 14:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:21.214 14:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:21.214 14:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:21.214 14:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.214 14:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.214 14:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.214 14:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.214 14:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.214 14:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.214 14:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.844 00:18:21.844 14:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.844 14:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.844 14:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.103 14:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.103 14:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.103 14:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.103 14:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.103 14:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.103 14:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.103 { 00:18:22.103 "auth": { 00:18:22.103 "dhgroup": "null", 00:18:22.103 "digest": "sha512", 00:18:22.103 "state": "completed" 00:18:22.103 }, 00:18:22.103 "cntlid": 99, 00:18:22.103 "listen_address": { 00:18:22.103 "adrfam": "IPv4", 00:18:22.103 "traddr": "10.0.0.2", 00:18:22.103 "trsvcid": "4420", 00:18:22.103 "trtype": "TCP" 00:18:22.103 }, 00:18:22.103 "peer_address": { 00:18:22.103 "adrfam": "IPv4", 00:18:22.103 "traddr": "10.0.0.1", 00:18:22.103 "trsvcid": "34596", 00:18:22.103 "trtype": "TCP" 00:18:22.103 }, 00:18:22.103 "qid": 0, 00:18:22.103 "state": "enabled", 00:18:22.103 "thread": "nvmf_tgt_poll_group_000" 
00:18:22.103 } 00:18:22.103 ]' 00:18:22.103 14:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.103 14:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.103 14:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.103 14:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:22.103 14:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.103 14:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.103 14:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.103 14:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.423 14:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:01:ZjRkZWNjNmJkNjY2YWQ3NGY2YWMzM2M1YTE5ZDI1ZWJFLMcm: --dhchap-ctrl-secret DHHC-1:02:ZDliNmI5Y2YxZjZkYjMyYmNmMTZkYWYzZWI4MDdlY2M2NmI1ZGVmYTYzYWYzMmE1jv0a6g==: 00:18:23.360 14:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.360 14:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:18:23.360 14:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.360 14:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.360 14:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.360 14:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.360 14:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:23.360 14:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:23.619 14:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:18:23.619 14:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.619 14:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:23.619 14:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:23.619 14:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:23.619 14:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.619 14:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.619 14:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.619 14:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:18:23.619 14:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.619 14:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.619 14:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.878 00:18:23.878 14:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.878 14:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.878 14:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.136 14:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.136 14:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.136 14:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.136 14:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.136 14:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.136 14:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.136 { 00:18:24.136 "auth": { 00:18:24.136 "dhgroup": "null", 00:18:24.137 "digest": "sha512", 00:18:24.137 "state": "completed" 00:18:24.137 }, 00:18:24.137 "cntlid": 101, 00:18:24.137 "listen_address": { 00:18:24.137 "adrfam": "IPv4", 00:18:24.137 "traddr": "10.0.0.2", 00:18:24.137 "trsvcid": "4420", 00:18:24.137 "trtype": "TCP" 00:18:24.137 }, 00:18:24.137 "peer_address": { 00:18:24.137 "adrfam": "IPv4", 00:18:24.137 "traddr": "10.0.0.1", 00:18:24.137 "trsvcid": "50664", 00:18:24.137 "trtype": "TCP" 00:18:24.137 }, 00:18:24.137 "qid": 0, 00:18:24.137 "state": "enabled", 00:18:24.137 "thread": "nvmf_tgt_poll_group_000" 00:18:24.137 } 00:18:24.137 ]' 00:18:24.137 14:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.137 14:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.137 14:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.137 14:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:24.137 14:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.395 14:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.395 14:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.395 14:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.654 14:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 
29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:02:YWU5MmY4Yjg0ZTRlMmEwZjg1OTE4NjFlOGU3MmZlMGM2NWVmM2Y5MTNiOTcyOTdjYzJ8bw==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MWUzMGYyZGNkNzdlZTdhZTVmMjQ4MTcxNmI1NWEga3S7: 00:18:25.220 14:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.220 14:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:18:25.220 14:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.220 14:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.220 14:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.220 14:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.220 14:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:25.220 14:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:25.479 14:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:18:25.479 14:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.479 14:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:25.479 14:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:25.479 14:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:25.479 14:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.479 14:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key3 00:18:25.479 14:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.479 14:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.479 14:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.479 14:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:25.479 14:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:25.738 00:18:25.738 14:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.738 14:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.738 14:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.997 14:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:18:25.997 14:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.997 14:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.997 14:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.997 14:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.997 14:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.997 { 00:18:25.997 "auth": { 00:18:25.997 "dhgroup": "null", 00:18:25.997 "digest": "sha512", 00:18:25.997 "state": "completed" 00:18:25.997 }, 00:18:25.997 "cntlid": 103, 00:18:25.997 "listen_address": { 00:18:25.997 "adrfam": "IPv4", 00:18:25.997 "traddr": "10.0.0.2", 00:18:25.997 "trsvcid": "4420", 00:18:25.997 "trtype": "TCP" 00:18:25.997 }, 00:18:25.997 "peer_address": { 00:18:25.997 "adrfam": "IPv4", 00:18:25.997 "traddr": "10.0.0.1", 00:18:25.997 "trsvcid": "50700", 00:18:25.997 "trtype": "TCP" 00:18:25.997 }, 00:18:25.997 "qid": 0, 00:18:25.997 "state": "enabled", 00:18:25.997 "thread": "nvmf_tgt_poll_group_000" 00:18:25.997 } 00:18:25.997 ]' 00:18:25.997 14:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.997 14:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:25.997 14:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.255 14:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:26.255 14:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.255 14:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.255 14:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.255 14:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.515 14:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:03:NzI5NTViMTU5MWY3NmI3NTJjMmUzZDg4MDE5NWY2NzkzYTRiN2Q3MWU4M2UzYWNiOTdkZjE1YTliNzZkZmNkMzNDPj4=: 00:18:27.452 14:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.452 14:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:18:27.452 14:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.452 14:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.452 14:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.452 14:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:27.452 14:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.452 14:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:27.452 14:36:39 
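On the target side, each iteration re-registers the host NQN with the key pair being tested and removes it again once the round completes, as the nvmf_subsystem_add_host / nvmf_subsystem_remove_host entries show. A minimal sketch of that registration, assuming the target's default RPC socket; key0/ckey0 refer to keys loaded earlier in the run (not shown in this excerpt):

    # Register the host with the subsystem, associating DH-HMAC-CHAP keys key0/ckey0 with it.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Revoke the registration after the connect/disconnect cycle.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
        nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9
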
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:27.452 14:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:18:27.452 14:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.452 14:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:27.452 14:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:27.452 14:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:27.452 14:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.452 14:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.452 14:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.452 14:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.452 14:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.452 14:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.452 14:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.019 00:18:28.019 14:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.019 14:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.019 14:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.277 14:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.277 14:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.277 14:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.277 14:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.277 14:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.277 14:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.277 { 00:18:28.277 "auth": { 00:18:28.277 "dhgroup": "ffdhe2048", 00:18:28.277 "digest": "sha512", 00:18:28.277 "state": "completed" 00:18:28.277 }, 00:18:28.277 "cntlid": 105, 00:18:28.277 "listen_address": { 00:18:28.277 "adrfam": "IPv4", 00:18:28.277 "traddr": "10.0.0.2", 00:18:28.277 "trsvcid": "4420", 00:18:28.277 "trtype": "TCP" 00:18:28.277 }, 00:18:28.277 "peer_address": { 00:18:28.277 "adrfam": "IPv4", 00:18:28.277 "traddr": "10.0.0.1", 00:18:28.277 "trsvcid": "50726", 00:18:28.277 "trtype": "TCP" 00:18:28.277 }, 00:18:28.277 "qid": 0, 
00:18:28.277 "state": "enabled", 00:18:28.277 "thread": "nvmf_tgt_poll_group_000" 00:18:28.277 } 00:18:28.277 ]' 00:18:28.277 14:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.277 14:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.277 14:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.277 14:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:28.277 14:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.277 14:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.277 14:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.277 14:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.843 14:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:00:NWU2NjE5MzlkYjM3OTU1NGRhMTBjMzk4OTE3ZGIwN2ZmYjNhNGNlZmFlMWQ3YWNiMSNgow==: --dhchap-ctrl-secret DHHC-1:03:ZDVlZjk5NTQwZTllYWYzNzQyZjUxMjFhMWFiYjlmMTg1ZmQyMzc2ODU1OTIzMzFjYWIyYmVmNTgxNjVmNmE3Na08ckc=: 00:18:29.778 14:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.778 14:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:18:29.778 14:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.778 14:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.778 14:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.778 14:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.778 14:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:29.778 14:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:29.778 14:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:18:29.778 14:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.778 14:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:29.778 14:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:29.778 14:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:29.778 14:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.778 14:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.778 14:36:42 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.778 14:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.778 14:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.778 14:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.778 14:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.368 00:18:30.368 14:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.368 14:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.368 14:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.645 14:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.645 14:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.645 14:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.645 14:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.645 14:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.645 14:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.645 { 00:18:30.645 "auth": { 00:18:30.645 "dhgroup": "ffdhe2048", 00:18:30.645 "digest": "sha512", 00:18:30.645 "state": "completed" 00:18:30.645 }, 00:18:30.645 "cntlid": 107, 00:18:30.645 "listen_address": { 00:18:30.645 "adrfam": "IPv4", 00:18:30.645 "traddr": "10.0.0.2", 00:18:30.645 "trsvcid": "4420", 00:18:30.645 "trtype": "TCP" 00:18:30.645 }, 00:18:30.645 "peer_address": { 00:18:30.645 "adrfam": "IPv4", 00:18:30.645 "traddr": "10.0.0.1", 00:18:30.645 "trsvcid": "50742", 00:18:30.645 "trtype": "TCP" 00:18:30.645 }, 00:18:30.645 "qid": 0, 00:18:30.645 "state": "enabled", 00:18:30.645 "thread": "nvmf_tgt_poll_group_000" 00:18:30.645 } 00:18:30.645 ]' 00:18:30.645 14:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.645 14:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.645 14:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.645 14:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:30.645 14:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.645 14:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.645 14:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.645 14:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.904 14:36:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:01:ZjRkZWNjNmJkNjY2YWQ3NGY2YWMzM2M1YTE5ZDI1ZWJFLMcm: --dhchap-ctrl-secret DHHC-1:02:ZDliNmI5Y2YxZjZkYjMyYmNmMTZkYWYzZWI4MDdlY2M2NmI1ZGVmYTYzYWYzMmE1jv0a6g==: 00:18:31.837 14:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.837 14:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:18:31.837 14:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.837 14:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.837 14:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.837 14:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.837 14:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:31.837 14:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:32.096 14:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:18:32.096 14:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.096 14:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:32.096 14:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:32.096 14:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:32.096 14:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.096 14:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.096 14:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.096 14:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.096 14:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.096 14:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.096 14:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.354 00:18:32.354 14:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.354 14:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:18:32.354 14:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.920 14:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.920 14:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.920 14:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.920 14:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.920 14:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.920 14:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.920 { 00:18:32.920 "auth": { 00:18:32.920 "dhgroup": "ffdhe2048", 00:18:32.920 "digest": "sha512", 00:18:32.920 "state": "completed" 00:18:32.920 }, 00:18:32.920 "cntlid": 109, 00:18:32.920 "listen_address": { 00:18:32.920 "adrfam": "IPv4", 00:18:32.920 "traddr": "10.0.0.2", 00:18:32.920 "trsvcid": "4420", 00:18:32.920 "trtype": "TCP" 00:18:32.920 }, 00:18:32.920 "peer_address": { 00:18:32.920 "adrfam": "IPv4", 00:18:32.920 "traddr": "10.0.0.1", 00:18:32.920 "trsvcid": "55342", 00:18:32.920 "trtype": "TCP" 00:18:32.920 }, 00:18:32.920 "qid": 0, 00:18:32.920 "state": "enabled", 00:18:32.920 "thread": "nvmf_tgt_poll_group_000" 00:18:32.920 } 00:18:32.920 ]' 00:18:32.920 14:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.920 14:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:32.920 14:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.920 14:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:32.920 14:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.920 14:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.920 14:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.920 14:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.178 14:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:02:YWU5MmY4Yjg0ZTRlMmEwZjg1OTE4NjFlOGU3MmZlMGM2NWVmM2Y5MTNiOTcyOTdjYzJ8bw==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MWUzMGYyZGNkNzdlZTdhZTVmMjQ4MTcxNmI1NWEga3S7: 00:18:34.111 14:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.111 14:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:18:34.111 14:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.111 14:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.111 14:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.111 14:36:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.111 14:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:34.111 14:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:34.111 14:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:18:34.111 14:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.111 14:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:34.111 14:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:34.111 14:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:34.111 14:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.111 14:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key3 00:18:34.111 14:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.111 14:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.369 14:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.369 14:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:34.369 14:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:34.628 00:18:34.628 14:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.628 14:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.628 14:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.887 14:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.887 14:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.887 14:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.887 14:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.887 14:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.887 14:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.887 { 00:18:34.887 "auth": { 00:18:34.887 "dhgroup": "ffdhe2048", 00:18:34.887 "digest": "sha512", 00:18:34.887 "state": "completed" 00:18:34.887 }, 00:18:34.887 "cntlid": 111, 00:18:34.887 "listen_address": { 00:18:34.887 "adrfam": "IPv4", 00:18:34.887 "traddr": "10.0.0.2", 00:18:34.887 "trsvcid": "4420", 00:18:34.887 "trtype": "TCP" 00:18:34.887 }, 00:18:34.887 
"peer_address": { 00:18:34.887 "adrfam": "IPv4", 00:18:34.887 "traddr": "10.0.0.1", 00:18:34.887 "trsvcid": "55362", 00:18:34.887 "trtype": "TCP" 00:18:34.887 }, 00:18:34.887 "qid": 0, 00:18:34.887 "state": "enabled", 00:18:34.887 "thread": "nvmf_tgt_poll_group_000" 00:18:34.887 } 00:18:34.887 ]' 00:18:34.887 14:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.887 14:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:34.887 14:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.887 14:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:34.887 14:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.144 14:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.144 14:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.144 14:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.402 14:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:03:NzI5NTViMTU5MWY3NmI3NTJjMmUzZDg4MDE5NWY2NzkzYTRiN2Q3MWU4M2UzYWNiOTdkZjE1YTliNzZkZmNkMzNDPj4=: 00:18:35.966 14:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.966 14:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:18:35.966 14:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.966 14:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.966 14:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.966 14:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:35.966 14:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.966 14:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:35.966 14:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:36.224 14:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:18:36.224 14:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.224 14:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:36.224 14:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:36.224 14:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:36.224 14:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.224 14:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.224 14:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.224 14:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.224 14:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.224 14:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.224 14:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.790 00:18:36.790 14:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.790 14:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.790 14:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.048 14:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.048 14:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.048 14:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.048 14:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.048 14:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.048 14:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.048 { 00:18:37.048 "auth": { 00:18:37.048 "dhgroup": "ffdhe3072", 00:18:37.048 "digest": "sha512", 00:18:37.048 "state": "completed" 00:18:37.048 }, 00:18:37.048 "cntlid": 113, 00:18:37.048 "listen_address": { 00:18:37.048 "adrfam": "IPv4", 00:18:37.048 "traddr": "10.0.0.2", 00:18:37.048 "trsvcid": "4420", 00:18:37.048 "trtype": "TCP" 00:18:37.048 }, 00:18:37.048 "peer_address": { 00:18:37.048 "adrfam": "IPv4", 00:18:37.048 "traddr": "10.0.0.1", 00:18:37.048 "trsvcid": "55372", 00:18:37.048 "trtype": "TCP" 00:18:37.048 }, 00:18:37.048 "qid": 0, 00:18:37.048 "state": "enabled", 00:18:37.048 "thread": "nvmf_tgt_poll_group_000" 00:18:37.048 } 00:18:37.048 ]' 00:18:37.048 14:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.048 14:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:37.048 14:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.048 14:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:37.048 14:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.048 14:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.048 14:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.048 14:36:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.306 14:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:00:NWU2NjE5MzlkYjM3OTU1NGRhMTBjMzk4OTE3ZGIwN2ZmYjNhNGNlZmFlMWQ3YWNiMSNgow==: --dhchap-ctrl-secret DHHC-1:03:ZDVlZjk5NTQwZTllYWYzNzQyZjUxMjFhMWFiYjlmMTg1ZmQyMzc2ODU1OTIzMzFjYWIyYmVmNTgxNjVmNmE3Na08ckc=: 00:18:38.240 14:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.240 14:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:18:38.241 14:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.241 14:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.241 14:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.241 14:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.241 14:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:38.241 14:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:38.499 14:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:18:38.499 14:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.499 14:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:38.499 14:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:38.499 14:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:38.499 14:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.499 14:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.499 14:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.499 14:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.499 14:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.499 14:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.499 14:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.757 00:18:38.757 14:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.757 14:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:38.757 14:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.324 14:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.324 14:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.324 14:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.324 14:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.324 14:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.324 14:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.324 { 00:18:39.324 "auth": { 00:18:39.324 "dhgroup": "ffdhe3072", 00:18:39.324 "digest": "sha512", 00:18:39.324 "state": "completed" 00:18:39.324 }, 00:18:39.324 "cntlid": 115, 00:18:39.324 "listen_address": { 00:18:39.324 "adrfam": "IPv4", 00:18:39.324 "traddr": "10.0.0.2", 00:18:39.324 "trsvcid": "4420", 00:18:39.324 "trtype": "TCP" 00:18:39.324 }, 00:18:39.324 "peer_address": { 00:18:39.324 "adrfam": "IPv4", 00:18:39.324 "traddr": "10.0.0.1", 00:18:39.324 "trsvcid": "55394", 00:18:39.324 "trtype": "TCP" 00:18:39.324 }, 00:18:39.324 "qid": 0, 00:18:39.324 "state": "enabled", 00:18:39.324 "thread": "nvmf_tgt_poll_group_000" 00:18:39.324 } 00:18:39.324 ]' 00:18:39.324 14:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.324 14:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:39.324 14:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.324 14:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:39.324 14:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.324 14:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.324 14:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.324 14:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.583 14:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:01:ZjRkZWNjNmJkNjY2YWQ3NGY2YWMzM2M1YTE5ZDI1ZWJFLMcm: --dhchap-ctrl-secret DHHC-1:02:ZDliNmI5Y2YxZjZkYjMyYmNmMTZkYWYzZWI4MDdlY2M2NmI1ZGVmYTYzYWYzMmE1jv0a6g==: 00:18:40.520 14:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.520 14:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:18:40.520 14:36:52 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.520 14:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.520 14:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.520 14:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.520 14:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:40.520 14:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:40.852 14:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:18:40.852 14:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.852 14:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:40.852 14:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:40.852 14:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:40.852 14:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.852 14:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.852 14:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.852 14:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.852 14:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.852 14:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.852 14:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.109 00:18:41.109 14:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.109 14:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.109 14:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.366 14:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.366 14:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.366 14:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.366 14:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.366 14:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.366 14:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.366 { 00:18:41.366 "auth": { 
00:18:41.366 "dhgroup": "ffdhe3072", 00:18:41.366 "digest": "sha512", 00:18:41.366 "state": "completed" 00:18:41.366 }, 00:18:41.366 "cntlid": 117, 00:18:41.366 "listen_address": { 00:18:41.366 "adrfam": "IPv4", 00:18:41.366 "traddr": "10.0.0.2", 00:18:41.366 "trsvcid": "4420", 00:18:41.366 "trtype": "TCP" 00:18:41.366 }, 00:18:41.366 "peer_address": { 00:18:41.366 "adrfam": "IPv4", 00:18:41.366 "traddr": "10.0.0.1", 00:18:41.366 "trsvcid": "55418", 00:18:41.366 "trtype": "TCP" 00:18:41.366 }, 00:18:41.366 "qid": 0, 00:18:41.366 "state": "enabled", 00:18:41.366 "thread": "nvmf_tgt_poll_group_000" 00:18:41.366 } 00:18:41.366 ]' 00:18:41.366 14:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.623 14:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:41.623 14:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.623 14:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:41.623 14:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.623 14:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.623 14:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.623 14:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.880 14:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:02:YWU5MmY4Yjg0ZTRlMmEwZjg1OTE4NjFlOGU3MmZlMGM2NWVmM2Y5MTNiOTcyOTdjYzJ8bw==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MWUzMGYyZGNkNzdlZTdhZTVmMjQ4MTcxNmI1NWEga3S7: 00:18:42.811 14:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.811 14:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:18:42.811 14:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.811 14:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.811 14:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.811 14:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.811 14:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:42.811 14:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:42.811 14:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:18:42.811 14:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.811 14:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:42.811 14:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 
00:18:42.811 14:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:42.811 14:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.811 14:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key3 00:18:42.811 14:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.811 14:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.811 14:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.811 14:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.811 14:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:43.375 00:18:43.375 14:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.375 14:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.375 14:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.633 14:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.633 14:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.633 14:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.633 14:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.633 14:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.633 14:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.633 { 00:18:43.633 "auth": { 00:18:43.633 "dhgroup": "ffdhe3072", 00:18:43.633 "digest": "sha512", 00:18:43.633 "state": "completed" 00:18:43.633 }, 00:18:43.633 "cntlid": 119, 00:18:43.633 "listen_address": { 00:18:43.633 "adrfam": "IPv4", 00:18:43.633 "traddr": "10.0.0.2", 00:18:43.633 "trsvcid": "4420", 00:18:43.633 "trtype": "TCP" 00:18:43.633 }, 00:18:43.633 "peer_address": { 00:18:43.633 "adrfam": "IPv4", 00:18:43.633 "traddr": "10.0.0.1", 00:18:43.633 "trsvcid": "43708", 00:18:43.633 "trtype": "TCP" 00:18:43.633 }, 00:18:43.633 "qid": 0, 00:18:43.633 "state": "enabled", 00:18:43.633 "thread": "nvmf_tgt_poll_group_000" 00:18:43.633 } 00:18:43.633 ]' 00:18:43.633 14:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.633 14:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:43.633 14:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.633 14:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:43.633 14:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.633 14:36:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.633 14:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.633 14:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.892 14:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:03:NzI5NTViMTU5MWY3NmI3NTJjMmUzZDg4MDE5NWY2NzkzYTRiN2Q3MWU4M2UzYWNiOTdkZjE1YTliNzZkZmNkMzNDPj4=: 00:18:44.826 14:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.826 14:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:18:44.826 14:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.826 14:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.826 14:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.826 14:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:44.826 14:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.826 14:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:44.826 14:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:45.084 14:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:18:45.084 14:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.084 14:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:45.084 14:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:45.084 14:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:45.084 14:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.084 14:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.084 14:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.084 14:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.084 14:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.084 14:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.084 14:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.342 00:18:45.342 14:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.342 14:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.342 14:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.906 14:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.906 14:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.906 14:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.906 14:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.906 14:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.906 14:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.906 { 00:18:45.906 "auth": { 00:18:45.906 "dhgroup": "ffdhe4096", 00:18:45.906 "digest": "sha512", 00:18:45.906 "state": "completed" 00:18:45.906 }, 00:18:45.906 "cntlid": 121, 00:18:45.906 "listen_address": { 00:18:45.906 "adrfam": "IPv4", 00:18:45.906 "traddr": "10.0.0.2", 00:18:45.906 "trsvcid": "4420", 00:18:45.906 "trtype": "TCP" 00:18:45.906 }, 00:18:45.906 "peer_address": { 00:18:45.906 "adrfam": "IPv4", 00:18:45.906 "traddr": "10.0.0.1", 00:18:45.906 "trsvcid": "43728", 00:18:45.906 "trtype": "TCP" 00:18:45.906 }, 00:18:45.906 "qid": 0, 00:18:45.906 "state": "enabled", 00:18:45.906 "thread": "nvmf_tgt_poll_group_000" 00:18:45.906 } 00:18:45.906 ]' 00:18:45.906 14:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.906 14:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:45.906 14:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.906 14:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:45.906 14:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.906 14:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.906 14:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.906 14:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.163 14:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:00:NWU2NjE5MzlkYjM3OTU1NGRhMTBjMzk4OTE3ZGIwN2ZmYjNhNGNlZmFlMWQ3YWNiMSNgow==: --dhchap-ctrl-secret DHHC-1:03:ZDVlZjk5NTQwZTllYWYzNzQyZjUxMjFhMWFiYjlmMTg1ZmQyMzc2ODU1OTIzMzFjYWIyYmVmNTgxNjVmNmE3Na08ckc=: 00:18:47.094 14:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.094 
14:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:18:47.095 14:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.095 14:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.095 14:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.095 14:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.095 14:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:47.095 14:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:47.352 14:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:18:47.352 14:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.352 14:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:47.352 14:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:47.352 14:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:47.352 14:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.352 14:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.352 14:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.352 14:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.352 14:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.352 14:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.352 14:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.918 00:18:47.918 14:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.918 14:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.918 14:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.918 14:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.918 14:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.918 14:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.918 14:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:47.918 14:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.918 14:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.918 { 00:18:47.918 "auth": { 00:18:47.918 "dhgroup": "ffdhe4096", 00:18:47.918 "digest": "sha512", 00:18:47.918 "state": "completed" 00:18:47.918 }, 00:18:47.918 "cntlid": 123, 00:18:47.918 "listen_address": { 00:18:47.918 "adrfam": "IPv4", 00:18:47.918 "traddr": "10.0.0.2", 00:18:47.918 "trsvcid": "4420", 00:18:47.918 "trtype": "TCP" 00:18:47.918 }, 00:18:47.918 "peer_address": { 00:18:47.918 "adrfam": "IPv4", 00:18:47.918 "traddr": "10.0.0.1", 00:18:47.918 "trsvcid": "43760", 00:18:47.918 "trtype": "TCP" 00:18:47.918 }, 00:18:47.918 "qid": 0, 00:18:47.918 "state": "enabled", 00:18:47.918 "thread": "nvmf_tgt_poll_group_000" 00:18:47.918 } 00:18:47.918 ]' 00:18:47.918 14:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.176 14:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:48.176 14:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.176 14:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:48.176 14:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.176 14:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.176 14:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.176 14:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.434 14:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:01:ZjRkZWNjNmJkNjY2YWQ3NGY2YWMzM2M1YTE5ZDI1ZWJFLMcm: --dhchap-ctrl-secret DHHC-1:02:ZDliNmI5Y2YxZjZkYjMyYmNmMTZkYWYzZWI4MDdlY2M2NmI1ZGVmYTYzYWYzMmE1jv0a6g==: 00:18:49.367 14:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.367 14:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:18:49.367 14:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.367 14:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.367 14:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.367 14:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.367 14:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:49.367 14:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:49.367 14:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:18:49.367 14:37:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.367 14:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:49.367 14:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:49.367 14:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:49.367 14:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.367 14:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.367 14:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.367 14:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.367 14:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.367 14:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.367 14:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.932 00:18:49.932 14:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.932 14:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.932 14:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.190 14:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.190 14:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.190 14:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.190 14:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.190 14:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.190 14:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.190 { 00:18:50.190 "auth": { 00:18:50.190 "dhgroup": "ffdhe4096", 00:18:50.190 "digest": "sha512", 00:18:50.190 "state": "completed" 00:18:50.190 }, 00:18:50.190 "cntlid": 125, 00:18:50.190 "listen_address": { 00:18:50.190 "adrfam": "IPv4", 00:18:50.190 "traddr": "10.0.0.2", 00:18:50.190 "trsvcid": "4420", 00:18:50.190 "trtype": "TCP" 00:18:50.190 }, 00:18:50.190 "peer_address": { 00:18:50.190 "adrfam": "IPv4", 00:18:50.190 "traddr": "10.0.0.1", 00:18:50.190 "trsvcid": "43780", 00:18:50.190 "trtype": "TCP" 00:18:50.190 }, 00:18:50.190 "qid": 0, 00:18:50.190 "state": "enabled", 00:18:50.190 "thread": "nvmf_tgt_poll_group_000" 00:18:50.190 } 00:18:50.190 ]' 00:18:50.190 14:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.190 14:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:50.190 14:37:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.190 14:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:50.190 14:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.448 14:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.448 14:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.448 14:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.707 14:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:02:YWU5MmY4Yjg0ZTRlMmEwZjg1OTE4NjFlOGU3MmZlMGM2NWVmM2Y5MTNiOTcyOTdjYzJ8bw==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MWUzMGYyZGNkNzdlZTdhZTVmMjQ4MTcxNmI1NWEga3S7: 00:18:51.270 14:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.270 14:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:18:51.270 14:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.270 14:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.270 14:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.270 14:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.270 14:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:51.270 14:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:51.528 14:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:18:51.528 14:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.528 14:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:51.528 14:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:51.528 14:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:51.528 14:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.528 14:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key3 00:18:51.528 14:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.528 14:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.528 14:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.528 14:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:51.528 14:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:52.093 00:18:52.093 14:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.093 14:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.093 14:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.352 14:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.352 14:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.352 14:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.352 14:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.352 14:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.352 14:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.352 { 00:18:52.352 "auth": { 00:18:52.352 "dhgroup": "ffdhe4096", 00:18:52.352 "digest": "sha512", 00:18:52.352 "state": "completed" 00:18:52.352 }, 00:18:52.352 "cntlid": 127, 00:18:52.352 "listen_address": { 00:18:52.352 "adrfam": "IPv4", 00:18:52.352 "traddr": "10.0.0.2", 00:18:52.352 "trsvcid": "4420", 00:18:52.352 "trtype": "TCP" 00:18:52.352 }, 00:18:52.352 "peer_address": { 00:18:52.352 "adrfam": "IPv4", 00:18:52.352 "traddr": "10.0.0.1", 00:18:52.352 "trsvcid": "42510", 00:18:52.352 "trtype": "TCP" 00:18:52.352 }, 00:18:52.352 "qid": 0, 00:18:52.352 "state": "enabled", 00:18:52.352 "thread": "nvmf_tgt_poll_group_000" 00:18:52.352 } 00:18:52.352 ]' 00:18:52.352 14:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.352 14:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:52.352 14:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.352 14:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:52.352 14:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.611 14:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.611 14:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.611 14:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.869 14:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:03:NzI5NTViMTU5MWY3NmI3NTJjMmUzZDg4MDE5NWY2NzkzYTRiN2Q3MWU4M2UzYWNiOTdkZjE1YTliNzZkZmNkMzNDPj4=: 00:18:53.811 14:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.811 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.811 14:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:18:53.811 14:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.811 14:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.811 14:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.811 14:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:53.811 14:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.811 14:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:53.811 14:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:54.069 14:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:18:54.069 14:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.069 14:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:54.069 14:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:54.069 14:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:54.069 14:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.070 14:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.070 14:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.070 14:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.070 14:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.070 14:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.070 14:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.635 00:18:54.635 14:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.635 14:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.635 14:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.635 14:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.635 14:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
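Each pass of the digest/dhgroup/key loops traced above runs the same three-step setup before anything is checked: narrow the host-side bdev_nvme DH-CHAP options to the digest and DH group under test, allow the host NQN on the target subsystem with the matching key pair, then attach a controller over TCP with that same pair. A condensed sketch of the sha512/ffdhe6144/key0 pass, using only commands that appear in this trace; hostrpc (target/auth.sh@31) expands to rpc.py against the host app's socket /var/tmp/host.sock, rpc_cmd is the autotest_common.sh helper for the target side, and <host-nqn> is a hypothetical stand-in for the uuid-based host NQN spelled out in the log:

  # host app: offer only sha512 + ffdhe6144 during DH-CHAP negotiation
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  # target: allow <host-nqn> on the subsystem, bound to the key0/ckey0 pair registered earlier in the test
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host-nqn> \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host app: attach over TCP, authenticating with the same pair (the ctrlr key requests controller authentication as well)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q <host-nqn> -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0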
00:18:54.635 14:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.635 14:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.894 14:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.894 14:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.894 { 00:18:54.894 "auth": { 00:18:54.894 "dhgroup": "ffdhe6144", 00:18:54.894 "digest": "sha512", 00:18:54.894 "state": "completed" 00:18:54.894 }, 00:18:54.894 "cntlid": 129, 00:18:54.894 "listen_address": { 00:18:54.894 "adrfam": "IPv4", 00:18:54.894 "traddr": "10.0.0.2", 00:18:54.894 "trsvcid": "4420", 00:18:54.894 "trtype": "TCP" 00:18:54.894 }, 00:18:54.894 "peer_address": { 00:18:54.894 "adrfam": "IPv4", 00:18:54.894 "traddr": "10.0.0.1", 00:18:54.894 "trsvcid": "42546", 00:18:54.894 "trtype": "TCP" 00:18:54.894 }, 00:18:54.894 "qid": 0, 00:18:54.894 "state": "enabled", 00:18:54.894 "thread": "nvmf_tgt_poll_group_000" 00:18:54.894 } 00:18:54.894 ]' 00:18:54.894 14:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.894 14:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:54.894 14:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.894 14:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:54.894 14:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.894 14:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.894 14:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.894 14:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.153 14:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:00:NWU2NjE5MzlkYjM3OTU1NGRhMTBjMzk4OTE3ZGIwN2ZmYjNhNGNlZmFlMWQ3YWNiMSNgow==: --dhchap-ctrl-secret DHHC-1:03:ZDVlZjk5NTQwZTllYWYzNzQyZjUxMjFhMWFiYjlmMTg1ZmQyMzc2ODU1OTIzMzFjYWIyYmVmNTgxNjVmNmE3Na08ckc=: 00:18:56.087 14:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.087 14:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:18:56.087 14:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.087 14:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.087 14:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.087 14:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.087 14:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:56.087 14:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:56.346 14:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:56.346 14:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.346 14:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:56.346 14:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:56.346 14:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:56.346 14:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.346 14:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.346 14:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.346 14:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.346 14:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.346 14:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.346 14:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.973 00:18:56.973 14:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.973 14:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.973 14:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.973 14:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.973 14:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.973 14:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.973 14:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.232 14:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.232 14:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.232 { 00:18:57.232 "auth": { 00:18:57.232 "dhgroup": "ffdhe6144", 00:18:57.232 "digest": "sha512", 00:18:57.232 "state": "completed" 00:18:57.232 }, 00:18:57.232 "cntlid": 131, 00:18:57.232 "listen_address": { 00:18:57.232 "adrfam": "IPv4", 00:18:57.232 "traddr": "10.0.0.2", 00:18:57.232 "trsvcid": "4420", 00:18:57.232 "trtype": "TCP" 00:18:57.232 }, 00:18:57.232 "peer_address": { 00:18:57.232 "adrfam": "IPv4", 00:18:57.232 "traddr": "10.0.0.1", 00:18:57.232 "trsvcid": "42574", 00:18:57.232 "trtype": "TCP" 00:18:57.232 }, 00:18:57.232 "qid": 0, 00:18:57.232 "state": "enabled", 00:18:57.232 "thread": "nvmf_tgt_poll_group_000" 00:18:57.232 } 00:18:57.232 ]' 00:18:57.232 
14:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.232 14:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:57.232 14:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.232 14:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:57.232 14:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.232 14:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.232 14:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.232 14:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.491 14:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:01:ZjRkZWNjNmJkNjY2YWQ3NGY2YWMzM2M1YTE5ZDI1ZWJFLMcm: --dhchap-ctrl-secret DHHC-1:02:ZDliNmI5Y2YxZjZkYjMyYmNmMTZkYWYzZWI4MDdlY2M2NmI1ZGVmYTYzYWYzMmE1jv0a6g==: 00:18:58.426 14:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.426 14:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:18:58.426 14:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.426 14:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.426 14:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.426 14:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.426 14:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:58.426 14:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:58.684 14:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:58.684 14:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.684 14:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:58.684 14:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:58.684 14:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:58.684 14:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.684 14:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.684 14:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.684 14:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
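Beyond the SPDK host application, each key is also exercised through the kernel initiator, as in the connect/disconnect pair just above: nvme-cli dials in with the plain-text DH-CHAP secrets in DHHC-1 form, the controller is disconnected again, and the host entry is removed from the subsystem so the next combination starts clean. Schematically, with the long base64 secrets abbreviated to hypothetical <...> placeholders (the full per-key values appear verbatim in the trace):

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q <host-nqn> --hostid <host-uuid> \
      --dhchap-secret 'DHHC-1:01:<host secret>:' \
      --dhchap-ctrl-secret 'DHHC-1:02:<controller secret>:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0      # expected output: "disconnected 1 controller(s)"
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <host-nqn>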
00:18:58.684 14:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.684 14:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.684 14:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.274 00:18:59.274 14:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.274 14:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.274 14:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.532 14:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.532 14:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.532 14:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.532 14:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.532 14:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.532 14:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.532 { 00:18:59.532 "auth": { 00:18:59.532 "dhgroup": "ffdhe6144", 00:18:59.532 "digest": "sha512", 00:18:59.532 "state": "completed" 00:18:59.532 }, 00:18:59.532 "cntlid": 133, 00:18:59.532 "listen_address": { 00:18:59.532 "adrfam": "IPv4", 00:18:59.532 "traddr": "10.0.0.2", 00:18:59.532 "trsvcid": "4420", 00:18:59.532 "trtype": "TCP" 00:18:59.532 }, 00:18:59.532 "peer_address": { 00:18:59.532 "adrfam": "IPv4", 00:18:59.532 "traddr": "10.0.0.1", 00:18:59.532 "trsvcid": "42606", 00:18:59.532 "trtype": "TCP" 00:18:59.532 }, 00:18:59.532 "qid": 0, 00:18:59.532 "state": "enabled", 00:18:59.532 "thread": "nvmf_tgt_poll_group_000" 00:18:59.532 } 00:18:59.532 ]' 00:18:59.532 14:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.532 14:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:59.532 14:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.532 14:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:59.532 14:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.790 14:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.790 14:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.790 14:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.049 14:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 
29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:02:YWU5MmY4Yjg0ZTRlMmEwZjg1OTE4NjFlOGU3MmZlMGM2NWVmM2Y5MTNiOTcyOTdjYzJ8bw==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MWUzMGYyZGNkNzdlZTdhZTVmMjQ4MTcxNmI1NWEga3S7: 00:19:00.984 14:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.984 14:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:19:00.984 14:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.984 14:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.984 14:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.984 14:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.984 14:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:00.984 14:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:00.984 14:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:00.984 14:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.984 14:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:00.984 14:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:00.984 14:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:00.984 14:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.984 14:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key3 00:19:00.984 14:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.984 14:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.984 14:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.984 14:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:00.984 14:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:01.550 00:19:01.550 14:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.550 14:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.550 14:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.808 14:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 
-- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.808 14:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.808 14:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.808 14:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.808 14:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.808 14:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.808 { 00:19:01.808 "auth": { 00:19:01.808 "dhgroup": "ffdhe6144", 00:19:01.808 "digest": "sha512", 00:19:01.808 "state": "completed" 00:19:01.808 }, 00:19:01.808 "cntlid": 135, 00:19:01.808 "listen_address": { 00:19:01.808 "adrfam": "IPv4", 00:19:01.808 "traddr": "10.0.0.2", 00:19:01.808 "trsvcid": "4420", 00:19:01.808 "trtype": "TCP" 00:19:01.808 }, 00:19:01.808 "peer_address": { 00:19:01.808 "adrfam": "IPv4", 00:19:01.808 "traddr": "10.0.0.1", 00:19:01.808 "trsvcid": "42628", 00:19:01.808 "trtype": "TCP" 00:19:01.808 }, 00:19:01.808 "qid": 0, 00:19:01.808 "state": "enabled", 00:19:01.808 "thread": "nvmf_tgt_poll_group_000" 00:19:01.808 } 00:19:01.808 ]' 00:19:01.808 14:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.808 14:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.066 14:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.066 14:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:02.066 14:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.066 14:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.066 14:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.066 14:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.324 14:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:03:NzI5NTViMTU5MWY3NmI3NTJjMmUzZDg4MDE5NWY2NzkzYTRiN2Q3MWU4M2UzYWNiOTdkZjE1YTliNzZkZmNkMzNDPj4=: 00:19:03.259 14:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.259 14:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:19:03.259 14:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.259 14:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.259 14:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.259 14:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:03.259 14:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.259 14:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups 
ffdhe8192 00:19:03.259 14:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:03.259 14:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:03.259 14:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.259 14:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:03.259 14:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:03.259 14:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:03.259 14:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.259 14:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.259 14:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.259 14:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.259 14:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.259 14:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.260 14:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.194 00:19:04.194 14:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.194 14:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.194 14:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.452 14:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.453 14:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.453 14:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.453 14:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.453 14:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.453 14:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.453 { 00:19:04.453 "auth": { 00:19:04.453 "dhgroup": "ffdhe8192", 00:19:04.453 "digest": "sha512", 00:19:04.453 "state": "completed" 00:19:04.453 }, 00:19:04.453 "cntlid": 137, 00:19:04.453 "listen_address": { 00:19:04.453 "adrfam": "IPv4", 00:19:04.453 "traddr": "10.0.0.2", 00:19:04.453 "trsvcid": "4420", 00:19:04.453 "trtype": "TCP" 00:19:04.453 }, 00:19:04.453 "peer_address": { 00:19:04.453 "adrfam": "IPv4", 00:19:04.453 "traddr": "10.0.0.1", 00:19:04.453 "trsvcid": "58686", 00:19:04.453 "trtype": "TCP" 00:19:04.453 }, 
00:19:04.453 "qid": 0, 00:19:04.453 "state": "enabled", 00:19:04.453 "thread": "nvmf_tgt_poll_group_000" 00:19:04.453 } 00:19:04.453 ]' 00:19:04.453 14:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.453 14:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:04.453 14:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.453 14:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:04.453 14:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.453 14:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.453 14:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.453 14:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.711 14:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:00:NWU2NjE5MzlkYjM3OTU1NGRhMTBjMzk4OTE3ZGIwN2ZmYjNhNGNlZmFlMWQ3YWNiMSNgow==: --dhchap-ctrl-secret DHHC-1:03:ZDVlZjk5NTQwZTllYWYzNzQyZjUxMjFhMWFiYjlmMTg1ZmQyMzc2ODU1OTIzMzFjYWIyYmVmNTgxNjVmNmE3Na08ckc=: 00:19:05.646 14:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.646 14:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:19:05.646 14:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.646 14:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.646 14:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.646 14:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:05.646 14:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:05.646 14:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:05.646 14:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:05.646 14:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:05.646 14:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:05.646 14:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:05.646 14:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:05.646 14:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.646 14:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:19:05.646 14:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.647 14:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.647 14:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.647 14:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.647 14:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.581 00:19:06.581 14:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.581 14:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.581 14:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.839 14:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.839 14:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.839 14:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.839 14:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.839 14:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.839 14:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.839 { 00:19:06.839 "auth": { 00:19:06.839 "dhgroup": "ffdhe8192", 00:19:06.839 "digest": "sha512", 00:19:06.839 "state": "completed" 00:19:06.839 }, 00:19:06.839 "cntlid": 139, 00:19:06.839 "listen_address": { 00:19:06.839 "adrfam": "IPv4", 00:19:06.839 "traddr": "10.0.0.2", 00:19:06.839 "trsvcid": "4420", 00:19:06.839 "trtype": "TCP" 00:19:06.839 }, 00:19:06.839 "peer_address": { 00:19:06.839 "adrfam": "IPv4", 00:19:06.839 "traddr": "10.0.0.1", 00:19:06.839 "trsvcid": "58712", 00:19:06.839 "trtype": "TCP" 00:19:06.839 }, 00:19:06.839 "qid": 0, 00:19:06.839 "state": "enabled", 00:19:06.839 "thread": "nvmf_tgt_poll_group_000" 00:19:06.839 } 00:19:06.839 ]' 00:19:06.839 14:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:06.839 14:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:06.839 14:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.839 14:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:06.839 14:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:06.839 14:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.839 14:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.839 14:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.096 14:37:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:01:ZjRkZWNjNmJkNjY2YWQ3NGY2YWMzM2M1YTE5ZDI1ZWJFLMcm: --dhchap-ctrl-secret DHHC-1:02:ZDliNmI5Y2YxZjZkYjMyYmNmMTZkYWYzZWI4MDdlY2M2NmI1ZGVmYTYzYWYzMmE1jv0a6g==: 00:19:08.028 14:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.028 14:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:19:08.028 14:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.028 14:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.028 14:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.028 14:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.028 14:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:08.028 14:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:08.028 14:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:08.028 14:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.028 14:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:08.028 14:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:08.028 14:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:08.028 14:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.028 14:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.028 14:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.028 14:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.028 14:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.028 14:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.028 14:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.960 00:19:08.961 14:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.961 14:37:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.961 14:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.218 14:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.218 14:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.218 14:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.218 14:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.218 14:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.218 14:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.218 { 00:19:09.218 "auth": { 00:19:09.218 "dhgroup": "ffdhe8192", 00:19:09.218 "digest": "sha512", 00:19:09.218 "state": "completed" 00:19:09.218 }, 00:19:09.218 "cntlid": 141, 00:19:09.218 "listen_address": { 00:19:09.218 "adrfam": "IPv4", 00:19:09.218 "traddr": "10.0.0.2", 00:19:09.218 "trsvcid": "4420", 00:19:09.218 "trtype": "TCP" 00:19:09.218 }, 00:19:09.218 "peer_address": { 00:19:09.218 "adrfam": "IPv4", 00:19:09.218 "traddr": "10.0.0.1", 00:19:09.218 "trsvcid": "58742", 00:19:09.218 "trtype": "TCP" 00:19:09.218 }, 00:19:09.218 "qid": 0, 00:19:09.218 "state": "enabled", 00:19:09.218 "thread": "nvmf_tgt_poll_group_000" 00:19:09.218 } 00:19:09.218 ]' 00:19:09.218 14:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.218 14:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.218 14:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.218 14:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:09.218 14:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.218 14:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.218 14:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.218 14:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.476 14:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:02:YWU5MmY4Yjg0ZTRlMmEwZjg1OTE4NjFlOGU3MmZlMGM2NWVmM2Y5MTNiOTcyOTdjYzJ8bw==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MWUzMGYyZGNkNzdlZTdhZTVmMjQ4MTcxNmI1NWEga3S7: 00:19:10.409 14:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.409 14:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:19:10.409 14:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.409 14:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.409 14:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.409 
14:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.409 14:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:10.409 14:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:10.667 14:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:19:10.667 14:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.667 14:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:10.667 14:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:10.667 14:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:10.667 14:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.667 14:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key3 00:19:10.667 14:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.667 14:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.667 14:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.667 14:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:10.667 14:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.232 00:19:11.232 14:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.232 14:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.232 14:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.490 14:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.490 14:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.490 14:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.490 14:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.490 14:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.490 14:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.490 { 00:19:11.490 "auth": { 00:19:11.490 "dhgroup": "ffdhe8192", 00:19:11.490 "digest": "sha512", 00:19:11.490 "state": "completed" 00:19:11.490 }, 00:19:11.490 "cntlid": 143, 00:19:11.490 "listen_address": { 00:19:11.490 "adrfam": "IPv4", 00:19:11.490 "traddr": "10.0.0.2", 00:19:11.490 "trsvcid": "4420", 00:19:11.490 "trtype": "TCP" 00:19:11.490 }, 00:19:11.490 
"peer_address": { 00:19:11.490 "adrfam": "IPv4", 00:19:11.490 "traddr": "10.0.0.1", 00:19:11.490 "trsvcid": "58764", 00:19:11.490 "trtype": "TCP" 00:19:11.490 }, 00:19:11.490 "qid": 0, 00:19:11.490 "state": "enabled", 00:19:11.490 "thread": "nvmf_tgt_poll_group_000" 00:19:11.490 } 00:19:11.490 ]' 00:19:11.490 14:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.490 14:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.490 14:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.490 14:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:11.490 14:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.748 14:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.748 14:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.748 14:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.005 14:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:03:NzI5NTViMTU5MWY3NmI3NTJjMmUzZDg4MDE5NWY2NzkzYTRiN2Q3MWU4M2UzYWNiOTdkZjE1YTliNzZkZmNkMzNDPj4=: 00:19:12.570 14:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.570 14:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:19:12.570 14:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.570 14:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.570 14:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.570 14:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:12.570 14:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:12.570 14:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:12.570 14:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:12.570 14:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:12.570 14:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:12.828 14:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:12.828 14:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.828 14:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:12.828 14:37:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:12.828 14:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:12.828 14:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.828 14:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.828 14:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.828 14:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.828 14:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.828 14:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.828 14:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.760 00:19:13.760 14:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.760 14:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.760 14:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.018 14:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.018 14:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.018 14:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.018 14:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.018 14:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.018 14:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.018 { 00:19:14.018 "auth": { 00:19:14.018 "dhgroup": "ffdhe8192", 00:19:14.018 "digest": "sha512", 00:19:14.018 "state": "completed" 00:19:14.018 }, 00:19:14.018 "cntlid": 145, 00:19:14.018 "listen_address": { 00:19:14.018 "adrfam": "IPv4", 00:19:14.018 "traddr": "10.0.0.2", 00:19:14.018 "trsvcid": "4420", 00:19:14.018 "trtype": "TCP" 00:19:14.018 }, 00:19:14.018 "peer_address": { 00:19:14.018 "adrfam": "IPv4", 00:19:14.018 "traddr": "10.0.0.1", 00:19:14.018 "trsvcid": "55156", 00:19:14.018 "trtype": "TCP" 00:19:14.018 }, 00:19:14.018 "qid": 0, 00:19:14.018 "state": "enabled", 00:19:14.018 "thread": "nvmf_tgt_poll_group_000" 00:19:14.018 } 00:19:14.018 ]' 00:19:14.018 14:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.018 14:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.018 14:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.018 14:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:14.018 14:37:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.018 14:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.018 14:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.018 14:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.275 14:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret DHHC-1:00:NWU2NjE5MzlkYjM3OTU1NGRhMTBjMzk4OTE3ZGIwN2ZmYjNhNGNlZmFlMWQ3YWNiMSNgow==: --dhchap-ctrl-secret DHHC-1:03:ZDVlZjk5NTQwZTllYWYzNzQyZjUxMjFhMWFiYjlmMTg1ZmQyMzc2ODU1OTIzMzFjYWIyYmVmNTgxNjVmNmE3Na08ckc=: 00:19:15.208 14:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.208 14:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:19:15.208 14:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.208 14:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.208 14:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.208 14:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key1 00:19:15.208 14:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.208 14:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.208 14:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.208 14:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:15.208 14:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:15.208 14:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:15.208 14:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:15.208 14:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:15.208 14:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:15.208 14:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:15.208 14:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key2 00:19:15.208 14:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:15.773 2024/07/10 14:37:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:15.773 request: 00:19:15.773 { 00:19:15.773 "method": "bdev_nvme_attach_controller", 00:19:15.773 "params": { 00:19:15.773 "name": "nvme0", 00:19:15.773 "trtype": "tcp", 00:19:15.774 "traddr": "10.0.0.2", 00:19:15.774 "adrfam": "ipv4", 00:19:15.774 "trsvcid": "4420", 00:19:15.774 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:15.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9", 00:19:15.774 "prchk_reftag": false, 00:19:15.774 "prchk_guard": false, 00:19:15.774 "hdgst": false, 00:19:15.774 "ddgst": false, 00:19:15.774 "dhchap_key": "key2" 00:19:15.774 } 00:19:15.774 } 00:19:15.774 Got JSON-RPC error response 00:19:15.774 GoRPCClient: error on JSON-RPC call 00:19:15.774 14:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:15.774 14:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:15.774 14:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:15.774 14:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:15.774 14:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:19:15.774 14:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.774 14:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.774 14:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.774 14:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.774 14:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.774 14:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.774 14:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.774 14:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:15.774 14:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:15.774 14:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:15.774 14:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:15.774 14:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:15.774 14:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:15.774 14:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:15.774 14:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:15.774 14:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:16.708 2024/07/10 14:37:28 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:16.708 request: 00:19:16.708 { 00:19:16.708 "method": "bdev_nvme_attach_controller", 00:19:16.708 "params": { 00:19:16.708 "name": "nvme0", 00:19:16.708 "trtype": "tcp", 00:19:16.708 "traddr": "10.0.0.2", 00:19:16.708 "adrfam": "ipv4", 00:19:16.708 "trsvcid": "4420", 00:19:16.708 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:16.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9", 00:19:16.708 "prchk_reftag": false, 00:19:16.708 "prchk_guard": false, 00:19:16.708 "hdgst": false, 00:19:16.708 "ddgst": false, 00:19:16.708 "dhchap_key": "key1", 00:19:16.708 "dhchap_ctrlr_key": "ckey2" 00:19:16.708 } 00:19:16.708 } 00:19:16.708 Got JSON-RPC error response 00:19:16.708 GoRPCClient: error on JSON-RPC call 00:19:16.708 14:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:16.708 14:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:16.708 14:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:16.708 14:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:16.708 14:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:19:16.708 14:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.708 14:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.708 14:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.708 14:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key1 00:19:16.708 14:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.708 14:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.708 14:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.708 14:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.708 14:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:16.708 14:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.708 14:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:16.708 14:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:16.708 14:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:16.708 14:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:16.708 14:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.709 14:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.274 2024/07/10 14:37:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:17.274 request: 00:19:17.274 { 00:19:17.274 "method": "bdev_nvme_attach_controller", 00:19:17.274 "params": { 00:19:17.274 "name": "nvme0", 00:19:17.274 "trtype": "tcp", 00:19:17.274 "traddr": "10.0.0.2", 00:19:17.274 "adrfam": "ipv4", 00:19:17.274 "trsvcid": "4420", 00:19:17.274 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:17.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9", 00:19:17.274 "prchk_reftag": false, 00:19:17.274 "prchk_guard": false, 00:19:17.274 "hdgst": false, 00:19:17.274 "ddgst": false, 00:19:17.274 "dhchap_key": "key1", 00:19:17.274 "dhchap_ctrlr_key": "ckey1" 00:19:17.274 } 00:19:17.274 } 00:19:17.274 Got JSON-RPC error response 00:19:17.274 GoRPCClient: error on JSON-RPC call 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 
-- # es=1 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 94486 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 94486 ']' 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 94486 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94486 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:17.274 killing process with pid 94486 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94486' 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 94486 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 94486 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=99438 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 99438 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 99438 ']' 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
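The restart traced here reduces to a simple pattern: relaunch nvmf_tgt inside the target network namespace with DH-HMAC-CHAP debug logging (-L nvmf_auth) enabled, record its pid, and poll until its RPC socket answers. The sketch below is condensed from the command visible in this log; the polling loop, the timeout, and the framework_start_init call used to leave the --wait-for-rpc holding state are assumptions standing in for the waitforlisten/rpc_cmd helpers, not their actual implementation.

    # Hedged sketch: start a second target with nvmf_auth logging and wait for its RPC socket.
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    ip netns exec nvmf_tgt_ns_spdk "$SPDK_BIN" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!                                    # later torn down by killprocess

    for _ in $(seq 1 100); do                     # poll ~10s for /var/tmp/spdk.sock (assumed loop)
        "$RPC" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
    "$RPC" framework_start_init                   # leave the --wait-for-rpc holding state (assumed step)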
00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:17.274 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.537 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:17.537 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:17.537 14:37:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:17.537 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:17.537 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.830 14:37:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.830 14:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:17.830 14:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 99438 00:19:17.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.830 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 99438 ']' 00:19:17.830 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.830 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:17.830 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.830 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:17.830 14:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.088 14:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:18.088 14:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:18.088 14:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:18.088 14:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.088 14:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.088 14:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.088 14:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:18.088 14:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.088 14:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:18.088 14:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:18.088 14:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:18.088 14:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.088 14:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key3 00:19:18.088 14:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.088 14:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.088 14:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
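Every connect_authenticate round in this log, including the one traced here, follows the same shape: bind the host NQN to a DH-HMAC-CHAP key on the subsystem, attach a controller from the host side with the matching key, verify the negotiated digest, DH group and auth state on the resulting queue pair, then detach before the next digest/dhgroup combination. The sketch below condenses that flow from the RPC calls shown in the surrounding trace; the variable names are placeholders and the key (key3) is assumed to have been registered earlier in the test, so this illustrates the sequence rather than reproducing the auth.sh function itself.

    # Hedged sketch of one positive DH-HMAC-CHAP round, condensed from the logged RPCs.
    TGT_RPC="rpc.py"                              # target-side RPC (default /var/tmp/spdk.sock)
    HOST_RPC="rpc.py -s /var/tmp/host.sock"       # host-side bdev_nvme RPC server
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9

    # 1. Allow the host on the subsystem and bind it to key3 (assumed already registered).
    $TGT_RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3

    # 2. Restrict the initiator to the digest/dhgroup under test, then attach with the same key.
    $HOST_RPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    $HOST_RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key3

    # 3. Authentication must have completed with exactly these parameters.
    [[ $($HOST_RPC bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$($TGT_RPC nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # 4. Detach so the next combination starts from a clean controller.
    $HOST_RPC bdev_nvme_detach_controller nvme0

The script additionally exercises the kernel initiator with nvme connect ... --dhchap-secret DHHC-1:... followed by nvme disconnect, as the surrounding entries show; that pass is omitted from the sketch.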
00:19:18.088 14:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:18.088 14:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:18.654 00:19:18.654 14:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.654 14:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.654 14:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.220 14:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.220 14:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.220 14:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.220 14:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.220 14:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.220 14:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.220 { 00:19:19.220 "auth": { 00:19:19.220 "dhgroup": "ffdhe8192", 00:19:19.220 "digest": "sha512", 00:19:19.220 "state": "completed" 00:19:19.220 }, 00:19:19.220 "cntlid": 1, 00:19:19.220 "listen_address": { 00:19:19.220 "adrfam": "IPv4", 00:19:19.220 "traddr": "10.0.0.2", 00:19:19.220 "trsvcid": "4420", 00:19:19.220 "trtype": "TCP" 00:19:19.220 }, 00:19:19.220 "peer_address": { 00:19:19.220 "adrfam": "IPv4", 00:19:19.220 "traddr": "10.0.0.1", 00:19:19.220 "trsvcid": "55194", 00:19:19.220 "trtype": "TCP" 00:19:19.220 }, 00:19:19.220 "qid": 0, 00:19:19.220 "state": "enabled", 00:19:19.220 "thread": "nvmf_tgt_poll_group_000" 00:19:19.220 } 00:19:19.220 ]' 00:19:19.220 14:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.220 14:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:19.220 14:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.220 14:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:19.220 14:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.220 14:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.220 14:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.220 14:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.478 14:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid 29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-secret 
DHHC-1:03:NzI5NTViMTU5MWY3NmI3NTJjMmUzZDg4MDE5NWY2NzkzYTRiN2Q3MWU4M2UzYWNiOTdkZjE1YTliNzZkZmNkMzNDPj4=: 00:19:20.411 14:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.411 14:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:19:20.411 14:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.411 14:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.411 14:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.411 14:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --dhchap-key key3 00:19:20.411 14:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.411 14:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.411 14:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.411 14:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:20.411 14:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:20.669 14:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.669 14:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:20.669 14:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.669 14:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:20.669 14:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:20.669 14:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:20.669 14:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:20.669 14:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.669 14:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.929 2024/07/10 14:37:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:20.929 request: 00:19:20.929 { 00:19:20.929 "method": "bdev_nvme_attach_controller", 00:19:20.929 "params": { 00:19:20.929 "name": "nvme0", 00:19:20.929 "trtype": "tcp", 00:19:20.929 "traddr": "10.0.0.2", 00:19:20.929 "adrfam": "ipv4", 00:19:20.929 "trsvcid": "4420", 00:19:20.929 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:20.929 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9", 00:19:20.929 "prchk_reftag": false, 00:19:20.929 "prchk_guard": false, 00:19:20.929 "hdgst": false, 00:19:20.929 "ddgst": false, 00:19:20.929 "dhchap_key": "key3" 00:19:20.929 } 00:19:20.929 } 00:19:20.929 Got JSON-RPC error response 00:19:20.929 GoRPCClient: error on JSON-RPC call 00:19:20.929 14:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:20.929 14:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:20.929 14:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:20.929 14:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:20.929 14:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:19:20.929 14:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:19:20.929 14:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:20.929 14:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:21.191 14:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.191 14:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:21.191 14:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.191 14:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:21.191 14:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:21.191 14:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:21.191 14:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:21.191 14:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.191 14:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.448 2024/07/10 14:37:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:21.448 request: 00:19:21.448 { 00:19:21.448 "method": "bdev_nvme_attach_controller", 00:19:21.448 "params": { 00:19:21.448 "name": "nvme0", 00:19:21.448 "trtype": "tcp", 00:19:21.448 "traddr": "10.0.0.2", 00:19:21.448 "adrfam": "ipv4", 00:19:21.448 "trsvcid": "4420", 00:19:21.448 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:21.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9", 00:19:21.448 "prchk_reftag": false, 00:19:21.448 "prchk_guard": false, 00:19:21.448 "hdgst": false, 00:19:21.448 "ddgst": false, 00:19:21.448 "dhchap_key": "key3" 00:19:21.448 } 00:19:21.448 } 00:19:21.448 Got JSON-RPC error response 00:19:21.448 GoRPCClient: error on JSON-RPC call 00:19:21.448 14:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:21.448 14:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:21.448 14:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:21.448 14:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:21.448 14:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:21.448 14:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:19:21.448 14:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:21.448 14:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:21.448 14:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:21.448 14:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:21.705 14:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:19:21.706 14:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.706 14:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.706 14:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.706 14:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:19:21.706 14:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.706 14:37:33 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:21.706 14:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.706 14:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:21.706 14:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:21.706 14:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:21.706 14:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:21.706 14:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:21.706 14:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:21.706 14:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:21.706 14:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:21.706 14:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:21.964 2024/07/10 14:37:34 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:21.964 request: 00:19:21.964 { 00:19:21.964 "method": "bdev_nvme_attach_controller", 00:19:21.964 "params": { 00:19:21.964 "name": "nvme0", 00:19:21.964 "trtype": "tcp", 00:19:21.964 "traddr": "10.0.0.2", 00:19:21.964 "adrfam": "ipv4", 00:19:21.964 "trsvcid": "4420", 00:19:21.964 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:21.964 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9", 00:19:21.964 "prchk_reftag": false, 00:19:21.964 "prchk_guard": false, 00:19:21.964 "hdgst": false, 00:19:21.964 "ddgst": false, 00:19:21.964 "dhchap_key": "key0", 00:19:21.964 "dhchap_ctrlr_key": "key1" 00:19:21.964 } 00:19:21.964 } 00:19:21.964 Got JSON-RPC error response 00:19:21.964 GoRPCClient: error on JSON-RPC call 00:19:22.221 14:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:22.221 14:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:22.221 14:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:22.221 14:37:34 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:22.221 14:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:22.221 14:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:22.478 00:19:22.478 14:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:19:22.478 14:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:19:22.478 14:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.736 14:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.736 14:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.736 14:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.993 14:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:19:22.993 14:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:22.993 14:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 94516 00:19:22.993 14:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 94516 ']' 00:19:22.993 14:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 94516 00:19:22.993 14:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:22.993 14:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:22.993 14:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94516 00:19:22.993 14:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:22.993 killing process with pid 94516 00:19:22.993 14:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:22.993 14:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94516' 00:19:22.993 14:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 94516 00:19:22.993 14:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 94516 00:19:23.250 14:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:23.250 14:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:23.250 14:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:23.250 14:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:23.250 14:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:23.250 14:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:23.250 14:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:23.250 rmmod nvme_tcp 00:19:23.508 rmmod nvme_fabrics 00:19:23.508 rmmod nvme_keyring 
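The JSON-RPC failures logged above are the expected outcome, not a defect: each NOT hostrpc bdev_nvme_attach_controller call deliberately presents a key (or controller key) that no longer matches what nvmf_subsystem_add_host registered on the target, and the attach must fail with Code=-5 Msg=Input/output error for the test to pass. A hedged sketch of that expected-failure check follows, using a simplified stand-in for the NOT helper from autotest_common.sh rather than its real implementation.

    # Hedged sketch of the negative path: the attach must FAIL when the DH-HMAC-CHAP keys mismatch.
    NOT() {                                       # simplified stand-in for the autotest_common.sh NOT helper
        if "$@"; then
            echo "expected failure, but command succeeded: $*" >&2
            return 1                              # authenticating with the wrong key fails the test
        fi
        return 0                                  # the command failed, which is what we wanted
    }

    HOST_RPC="rpc.py -s /var/tmp/host.sock"
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9

    # The target only granted key1 to this host, so presenting key2 must be rejected;
    # the RPC surfaces that rejection as the Code=-5 Input/output error seen in the log.
    NOT $HOST_RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2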
00:19:23.508 14:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:23.508 14:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:23.508 14:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:23.508 14:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 99438 ']' 00:19:23.508 14:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 99438 00:19:23.508 14:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 99438 ']' 00:19:23.508 14:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 99438 00:19:23.508 14:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:23.508 14:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:23.508 14:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99438 00:19:23.508 14:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:23.508 killing process with pid 99438 00:19:23.508 14:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:23.508 14:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99438' 00:19:23.508 14:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 99438 00:19:23.508 14:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 99438 00:19:23.508 14:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:23.508 14:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:23.508 14:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:23.508 14:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:23.508 14:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:23.508 14:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.508 14:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:23.508 14:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.508 14:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:23.508 14:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Wye /tmp/spdk.key-sha256.CEY /tmp/spdk.key-sha384.ZEG /tmp/spdk.key-sha512.ghB /tmp/spdk.key-sha512.lZ6 /tmp/spdk.key-sha384.wv0 /tmp/spdk.key-sha256.e0d '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:19:23.508 00:19:23.508 real 3m0.573s 00:19:23.508 user 7m19.629s 00:19:23.508 sys 0m21.619s 00:19:23.508 14:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:23.767 ************************************ 00:19:23.767 END TEST nvmf_auth_target 00:19:23.767 ************************************ 00:19:23.767 14:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.767 14:37:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:23.767 14:37:35 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:19:23.767 14:37:35 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:23.767 14:37:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:23.767 14:37:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:23.767 14:37:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:23.767 ************************************ 00:19:23.767 START TEST nvmf_bdevio_no_huge 00:19:23.767 ************************************ 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:23.767 * Looking for test storage... 00:19:23.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@14 -- # nvmftestinit 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:23.767 Cannot find device "nvmf_tgt_br" 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:23.767 Cannot find device "nvmf_tgt_br2" 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:19:23.767 14:37:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:23.767 14:37:35 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:23.767 Cannot find device "nvmf_tgt_br" 00:19:23.767 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:19:23.767 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:23.767 Cannot find device "nvmf_tgt_br2" 00:19:23.767 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:19:23.767 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:23.767 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:24.025 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:24.025 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:24.025 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:19:24.025 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:24.025 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:24.025 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:19:24.025 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:24.025 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:24.025 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:24.025 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:24.025 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:24.025 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:24.025 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:24.025 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:24.025 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:24.025 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:24.025 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:24.025 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:24.025 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:24.025 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:24.025 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:24.025 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:24.025 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:24.025 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:24.025 14:37:36 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:24.025 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:24.025 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:24.025 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:24.025 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:24.025 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:24.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:24.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:19:24.025 00:19:24.025 --- 10.0.0.2 ping statistics --- 00:19:24.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.025 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:19:24.025 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:24.025 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:24.025 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:19:24.025 00:19:24.025 --- 10.0.0.3 ping statistics --- 00:19:24.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.026 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:19:24.026 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:24.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:24.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:19:24.026 00:19:24.026 --- 10.0.0.1 ping statistics --- 00:19:24.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.026 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:19:24.026 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:24.026 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:19:24.026 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:24.026 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:24.026 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:24.026 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:24.026 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:24.026 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:24.026 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:24.284 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:24.284 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:24.284 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:24.284 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:24.284 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=99835 00:19:24.284 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 99835 00:19:24.284 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 99835 ']' 00:19:24.284 14:37:36 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.284 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:24.284 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:24.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.284 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.284 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:24.284 14:37:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:24.284 [2024-07-10 14:37:36.389061] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:19:24.284 [2024-07-10 14:37:36.389174] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:24.284 [2024-07-10 14:37:36.533553] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:24.284 [2024-07-10 14:37:36.537097] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:24.542 [2024-07-10 14:37:36.637270] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.542 [2024-07-10 14:37:36.637350] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.542 [2024-07-10 14:37:36.637363] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.542 [2024-07-10 14:37:36.637374] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.542 [2024-07-10 14:37:36.637383] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
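The target brought up above runs inside the nvmf_tgt_ns_spdk namespace with hugepages disabled (--no-huge -s 1024) and core mask 0x78. As a rough sketch of the equivalent manual launch, assuming the repository path used in this run and substituting a plain rpc.py probe for the waitforlisten helper:

# Start the NVMe-oF target in the test namespace without hugepages.
sudo ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!

# Wait until the RPC socket answers before issuing any configuration RPCs.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 30 \
    rpc_get_methods > /dev/null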
00:19:24.542 [2024-07-10 14:37:36.637464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:24.542 [2024-07-10 14:37:36.637610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:19:24.542 [2024-07-10 14:37:36.637699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:24.542 [2024-07-10 14:37:36.637692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:25.478 [2024-07-10 14:37:37.556931] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:25.478 Malloc0 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:25.478 [2024-07-10 14:37:37.596039] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:25.478 { 00:19:25.478 "params": { 00:19:25.478 "name": "Nvme$subsystem", 00:19:25.478 "trtype": "$TEST_TRANSPORT", 00:19:25.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:25.478 "adrfam": "ipv4", 00:19:25.478 "trsvcid": "$NVMF_PORT", 00:19:25.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:25.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:25.478 "hdgst": ${hdgst:-false}, 00:19:25.478 "ddgst": ${ddgst:-false} 00:19:25.478 }, 00:19:25.478 "method": "bdev_nvme_attach_controller" 00:19:25.478 } 00:19:25.478 EOF 00:19:25.478 )") 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:19:25.478 14:37:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:25.478 "params": { 00:19:25.478 "name": "Nvme1", 00:19:25.478 "trtype": "tcp", 00:19:25.478 "traddr": "10.0.0.2", 00:19:25.478 "adrfam": "ipv4", 00:19:25.478 "trsvcid": "4420", 00:19:25.478 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.478 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:25.478 "hdgst": false, 00:19:25.478 "ddgst": false 00:19:25.478 }, 00:19:25.478 "method": "bdev_nvme_attach_controller" 00:19:25.478 }' 00:19:25.478 [2024-07-10 14:37:37.662979] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:19:25.478 [2024-07-10 14:37:37.663516] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid99890 ] 00:19:25.737 [2024-07-10 14:37:37.801075] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
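The heredoc printed above is the per-subsystem fragment that gen_nvmf_target_json emits; bdevio reads it from /dev/fd/62 as a complete JSON configuration. A sketch of that file once wrapped in the usual SPDK JSON-config envelope (the envelope is not visible in the trace and is assumed here; /tmp/bdevio_nvme.json is only an illustrative name):

# Reconstruct roughly what bdevio is being fed on /dev/fd/62.
cat << 'JSON' > /tmp/bdevio_nvme.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON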
00:19:25.737 [2024-07-10 14:37:37.803450] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:25.737 [2024-07-10 14:37:37.942119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.737 [2024-07-10 14:37:37.942224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.737 [2024-07-10 14:37:37.942235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.994 I/O targets: 00:19:25.994 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:25.994 00:19:25.994 00:19:25.994 CUnit - A unit testing framework for C - Version 2.1-3 00:19:25.994 http://cunit.sourceforge.net/ 00:19:25.994 00:19:25.994 00:19:25.994 Suite: bdevio tests on: Nvme1n1 00:19:25.994 Test: blockdev write read block ...passed 00:19:25.994 Test: blockdev write zeroes read block ...passed 00:19:25.994 Test: blockdev write zeroes read no split ...passed 00:19:25.994 Test: blockdev write zeroes read split ...passed 00:19:25.994 Test: blockdev write zeroes read split partial ...passed 00:19:25.994 Test: blockdev reset ...[2024-07-10 14:37:38.233405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:25.994 [2024-07-10 14:37:38.233586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc66090 (9): Bad file descriptor 00:19:25.994 [2024-07-10 14:37:38.248260] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:25.994 passed 00:19:25.994 Test: blockdev write read 8 blocks ...passed 00:19:25.994 Test: blockdev write read size > 128k ...passed 00:19:25.994 Test: blockdev write read invalid size ...passed 00:19:26.252 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:26.252 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:26.252 Test: blockdev write read max offset ...passed 00:19:26.252 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:26.252 Test: blockdev writev readv 8 blocks ...passed 00:19:26.252 Test: blockdev writev readv 30 x 1block ...passed 00:19:26.252 Test: blockdev writev readv block ...passed 00:19:26.252 Test: blockdev writev readv size > 128k ...passed 00:19:26.252 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:26.252 Test: blockdev comparev and writev ...[2024-07-10 14:37:38.420613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:26.252 [2024-07-10 14:37:38.420683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:26.252 [2024-07-10 14:37:38.420708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:26.252 [2024-07-10 14:37:38.420735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:26.252 [2024-07-10 14:37:38.421103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:26.252 [2024-07-10 14:37:38.421130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:26.252 [2024-07-10 14:37:38.421153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:19:26.252 [2024-07-10 14:37:38.421167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:26.252 [2024-07-10 14:37:38.421837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:26.252 [2024-07-10 14:37:38.421865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:26.252 [2024-07-10 14:37:38.421883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:26.252 [2024-07-10 14:37:38.421893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:26.252 [2024-07-10 14:37:38.422186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:26.252 [2024-07-10 14:37:38.422207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:26.252 [2024-07-10 14:37:38.422224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:26.252 [2024-07-10 14:37:38.422234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:26.252 passed 00:19:26.252 Test: blockdev nvme passthru rw ...passed 00:19:26.252 Test: blockdev nvme passthru vendor specific ...[2024-07-10 14:37:38.504863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:26.252 [2024-07-10 14:37:38.504927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:26.252 [2024-07-10 14:37:38.505122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:26.252 [2024-07-10 14:37:38.505141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:26.252 [2024-07-10 14:37:38.505274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:26.252 [2024-07-10 14:37:38.505309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:26.252 [2024-07-10 14:37:38.505441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:26.252 [2024-07-10 14:37:38.505465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:26.252 passed 00:19:26.252 Test: blockdev nvme admin passthru ...passed 00:19:26.510 Test: blockdev copy ...passed 00:19:26.510 00:19:26.510 Run Summary: Type Total Ran Passed Failed Inactive 00:19:26.510 suites 1 1 n/a 0 0 00:19:26.510 tests 23 23 23 0 0 00:19:26.510 asserts 152 152 152 0 n/a 00:19:26.510 00:19:26.510 Elapsed time = 0.912 seconds 00:19:26.768 14:37:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:26.768 14:37:38 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.768 14:37:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:26.768 14:37:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.768 14:37:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:26.768 14:37:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:26.768 14:37:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:26.768 14:37:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:19:26.768 14:37:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:26.768 14:37:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:19:26.768 14:37:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:26.768 14:37:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:26.768 rmmod nvme_tcp 00:19:26.768 rmmod nvme_fabrics 00:19:26.768 rmmod nvme_keyring 00:19:26.768 14:37:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:26.768 14:37:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:19:26.768 14:37:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:19:26.768 14:37:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 99835 ']' 00:19:26.768 14:37:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 99835 00:19:26.768 14:37:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 99835 ']' 00:19:26.768 14:37:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 99835 00:19:26.768 14:37:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:19:26.768 14:37:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:26.768 14:37:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99835 00:19:26.768 14:37:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:19:26.768 14:37:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:19:26.768 14:37:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99835' 00:19:26.768 killing process with pid 99835 00:19:26.768 14:37:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 99835 00:19:26.768 14:37:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 99835 00:19:27.334 14:37:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:27.334 14:37:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:27.334 14:37:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:27.334 14:37:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:27.334 14:37:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:27.334 14:37:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.334 14:37:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:27.334 14:37:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.334 
14:37:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:27.334 00:19:27.334 real 0m3.512s 00:19:27.334 user 0m12.618s 00:19:27.334 sys 0m1.207s 00:19:27.334 14:37:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:27.334 14:37:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:27.334 ************************************ 00:19:27.334 END TEST nvmf_bdevio_no_huge 00:19:27.334 ************************************ 00:19:27.334 14:37:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:27.334 14:37:39 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:27.334 14:37:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:27.334 14:37:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:27.334 14:37:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:27.334 ************************************ 00:19:27.334 START TEST nvmf_tls 00:19:27.334 ************************************ 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:27.334 * Looking for test storage... 00:19:27.334 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:27.334 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- 
# '[' -z tcp ']' 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:27.335 Cannot find device "nvmf_tgt_br" 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:27.335 Cannot find device "nvmf_tgt_br2" 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:27.335 Cannot find device "nvmf_tgt_br" 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:27.335 Cannot find device "nvmf_tgt_br2" 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip 
link delete nvmf_br type bridge 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:27.335 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:27.335 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:27.593 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:19:27.593 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:27.593 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:27.593 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:19:27.593 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:27.593 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:27.593 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:27.593 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:27.593 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:27.593 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:27.593 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:27.593 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:27.593 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:27.593 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:27.593 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:27.593 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:27.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:27.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:19:27.594 00:19:27.594 --- 10.0.0.2 ping statistics --- 00:19:27.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.594 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:27.594 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:27.594 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:19:27.594 00:19:27.594 --- 10.0.0.3 ping statistics --- 00:19:27.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.594 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:27.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:27.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:19:27.594 00:19:27.594 --- 10.0.0.1 ping statistics --- 00:19:27.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.594 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100076 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100076 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100076 ']' 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:27.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:27.594 14:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.853 [2024-07-10 14:37:39.909678] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 
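Because the target above is started with --wait-for-rpc, its socket layer can be switched to the ssl implementation and pinned to TLS 1.3 before the framework finishes initializing; that is what the next block of RPCs does. In outline:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Make ssl the default socket implementation and require TLS 1.3.
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13
# Only then let the application complete subsystem initialization.
$rpc framework_start_init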
00:19:27.853 [2024-07-10 14:37:39.909774] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.853 [2024-07-10 14:37:40.033479] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:27.853 [2024-07-10 14:37:40.056378] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.853 [2024-07-10 14:37:40.096664] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.853 [2024-07-10 14:37:40.096730] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.853 [2024-07-10 14:37:40.096745] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.853 [2024-07-10 14:37:40.096756] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.853 [2024-07-10 14:37:40.096765] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:27.853 [2024-07-10 14:37:40.096794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.111 14:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:28.111 14:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:28.111 14:37:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:28.111 14:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:28.111 14:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.111 14:37:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.111 14:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:19:28.112 14:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:28.370 true 00:19:28.370 14:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:28.370 14:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:19:28.628 14:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:19:28.628 14:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:19:28.628 14:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:28.886 14:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:28.886 14:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:19:29.144 14:37:41 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:19:29.144 14:37:41 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:19:29.144 14:37:41 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:29.402 14:37:41 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:19:29.402 14:37:41 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:29.660 14:37:41 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # 
version=7 00:19:29.660 14:37:41 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:19:29.660 14:37:41 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:19:29.660 14:37:41 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:29.918 14:37:42 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:19:29.918 14:37:42 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:19:29.918 14:37:42 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:30.176 14:37:42 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:30.176 14:37:42 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:19:30.435 14:37:42 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:19:30.435 14:37:42 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:19:30.435 14:37:42 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:30.694 14:37:42 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:19:30.694 14:37:42 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:30.952 14:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:19:30.952 14:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:19:30.952 14:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:30.952 14:37:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:30.952 14:37:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:30.952 14:37:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:30.952 14:37:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:19:30.952 14:37:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:30.952 14:37:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:31.210 14:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:31.210 14:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:31.210 14:37:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:31.210 14:37:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:31.210 14:37:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:31.210 14:37:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:19:31.210 14:37:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:31.210 14:37:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:31.210 14:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:31.210 14:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:19:31.210 14:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.kfisWWiByd 00:19:31.210 14:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:31.210 14:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.mg5DatZNCS 
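The two keys generated above are in the NVMe/TCP PSK interchange format: the NVMeTLSkey-1 prefix, a hash indicator (01 here), and the base64 encoding of the configured secret with a CRC-32 appended, terminated by a colon. The test derives them with the format_interchange_psk helper; a rough stand-in, with the little-endian byte order of the appended CRC being an assumption of this sketch, is:

# Build an interchange-format PSK from the first test secret (illustrative only).
key=00112233445566778899aabbccddeeff
python3 - "$key" << 'PY'
import base64, struct, sys, zlib
secret = sys.argv[1].encode()                             # secret taken as ASCII text
crc = struct.pack('<I', zlib.crc32(secret) & 0xffffffff)  # assumed little-endian CRC-32
print('NVMeTLSkey-1:01:' + base64.b64encode(secret + crc).decode() + ':')
PY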
00:19:31.210 14:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:31.210 14:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:31.210 14:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.kfisWWiByd 00:19:31.210 14:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.mg5DatZNCS 00:19:31.210 14:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:31.468 14:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:19:31.726 14:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.kfisWWiByd 00:19:31.726 14:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.kfisWWiByd 00:19:31.726 14:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:31.984 [2024-07-10 14:37:44.100469] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.984 14:37:44 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:32.242 14:37:44 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:32.500 [2024-07-10 14:37:44.592563] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:32.500 [2024-07-10 14:37:44.592785] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:32.500 14:37:44 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:32.795 malloc0 00:19:32.795 14:37:44 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:33.056 14:37:45 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kfisWWiByd 00:19:33.314 [2024-07-10 14:37:45.411324] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:33.314 14:37:45 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.kfisWWiByd 00:19:45.509 Initializing NVMe Controllers 00:19:45.509 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:45.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:45.509 Initialization complete. Launching workers. 
00:19:45.509 ======================================================== 00:19:45.509 Latency(us) 00:19:45.509 Device Information : IOPS MiB/s Average min max 00:19:45.509 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9475.78 37.01 6755.70 1341.63 12854.80 00:19:45.509 ======================================================== 00:19:45.509 Total : 9475.78 37.01 6755.70 1341.63 12854.80 00:19:45.509 00:19:45.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:45.509 14:37:55 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kfisWWiByd 00:19:45.509 14:37:55 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:45.509 14:37:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:45.509 14:37:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:45.509 14:37:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kfisWWiByd' 00:19:45.509 14:37:55 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:45.509 14:37:55 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100413 00:19:45.509 14:37:55 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:45.509 14:37:55 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100413 /var/tmp/bdevperf.sock 00:19:45.509 14:37:55 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:45.509 14:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100413 ']' 00:19:45.509 14:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:45.509 14:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:45.509 14:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:45.509 14:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:45.509 14:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.509 [2024-07-10 14:37:55.698145] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:19:45.509 [2024-07-10 14:37:55.698245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100413 ] 00:19:45.509 [2024-07-10 14:37:55.820347] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
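The run_bdevperf call above swaps spdk_nvme_perf for the bdevperf application: bdevperf is started idle (-z) on its own RPC socket with the workload parameters, a TLS-enabled NVMe bdev is attached over that socket with the same PSK the target was configured with, and the run is then kicked off through bdevperf.py. In outline, mirroring the trace that follows:

spdk=/home/vagrant/spdk_repo/spdk
# Start bdevperf idle on a private RPC socket with the workload parameters.
$spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 &

# Attach the target subsystem over TCP, presenting the shared PSK.
$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.kfisWWiByd

# Run the configured verify workload against the resulting TLSTESTn1 bdev.
$spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests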
00:19:45.509 [2024-07-10 14:37:55.838256] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.509 [2024-07-10 14:37:55.880594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.509 14:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:45.509 14:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:45.509 14:37:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kfisWWiByd 00:19:45.509 [2024-07-10 14:37:56.201173] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:45.509 [2024-07-10 14:37:56.201297] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:45.509 TLSTESTn1 00:19:45.509 14:37:56 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:45.509 Running I/O for 10 seconds... 00:19:55.479 00:19:55.479 Latency(us) 00:19:55.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.479 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:55.479 Verification LBA range: start 0x0 length 0x2000 00:19:55.479 TLSTESTn1 : 10.02 3797.97 14.84 0.00 0.00 33636.85 6940.86 31933.91 00:19:55.479 =================================================================================================================== 00:19:55.479 Total : 3797.97 14.84 0.00 0.00 33636.85 6940.86 31933.91 00:19:55.479 0 00:19:55.479 14:38:06 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:55.479 14:38:06 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 100413 00:19:55.479 14:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100413 ']' 00:19:55.479 14:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100413 00:19:55.479 14:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:55.479 14:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:55.479 14:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100413 00:19:55.479 14:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:55.480 14:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:55.480 killing process with pid 100413 00:19:55.480 14:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100413' 00:19:55.480 14:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100413 00:19:55.480 Received shutdown signal, test time was about 10.000000 seconds 00:19:55.480 00:19:55.480 Latency(us) 00:19:55.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.480 =================================================================================================================== 00:19:55.480 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:55.480 [2024-07-10 14:38:06.449788] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:55.480 14:38:06 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100413 00:19:55.480 14:38:06 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mg5DatZNCS 00:19:55.480 14:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:55.480 14:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mg5DatZNCS 00:19:55.480 14:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:55.480 14:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.480 14:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:55.480 14:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.480 14:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mg5DatZNCS 00:19:55.480 14:38:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:55.480 14:38:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:55.480 14:38:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:55.480 14:38:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.mg5DatZNCS' 00:19:55.480 14:38:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:55.480 14:38:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100546 00:19:55.480 14:38:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:55.480 14:38:06 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:55.480 14:38:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100546 /var/tmp/bdevperf.sock 00:19:55.480 14:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100546 ']' 00:19:55.480 14:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:55.480 14:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:55.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:55.480 14:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:55.480 14:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:55.480 14:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.480 [2024-07-10 14:38:06.640985] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:19:55.480 [2024-07-10 14:38:06.641078] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100546 ] 00:19:55.480 [2024-07-10 14:38:06.759526] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:55.480 [2024-07-10 14:38:06.778027] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.480 [2024-07-10 14:38:06.816867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.480 14:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:55.480 14:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:55.480 14:38:06 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mg5DatZNCS 00:19:55.480 [2024-07-10 14:38:07.137508] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:55.480 [2024-07-10 14:38:07.137694] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:55.480 [2024-07-10 14:38:07.148037] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:55.480 [2024-07-10 14:38:07.148041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb0c10 (107): Transport endpoint is not connected 00:19:55.480 [2024-07-10 14:38:07.149020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb0c10 (9): Bad file descriptor 00:19:55.480 [2024-07-10 14:38:07.150015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:55.480 [2024-07-10 14:38:07.150067] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:55.480 [2024-07-10 14:38:07.150098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
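The attach failure above is the point of this case: target/tls.sh@146 runs run_bdevperf under the NOT helper, so the step only passes when bdev_nvme_attach_controller is rejected for /tmp/tmp.mg5DatZNCS, a key that was never registered for host1 on cnode1. The JSON-RPC error dump below (Code=-5, Input/output error) is the expected outcome. A minimal sketch of that inverted assertion, reusing the exact attach command recorded above (the wrapper name is illustrative, not SPDK's actual NOT implementation from autotest_common.sh):

expect_attach_failure() {
    local psk=$1
    # Same command as target/tls.sh@34 above; bdevperf must already be listening on the RPC socket.
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$psk"; then
        echo "unexpected: controller attached with key $psk" >&2
        return 1   # the negative test fails if the attach succeeds
    fi
    return 0       # rejection is the pass condition here
}

expect_attach_failure /tmp/tmp.mg5DatZNCS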
00:19:55.480 2024/07/10 14:38:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.mg5DatZNCS subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:55.480 request: 00:19:55.480 { 00:19:55.480 "method": "bdev_nvme_attach_controller", 00:19:55.480 "params": { 00:19:55.480 "name": "TLSTEST", 00:19:55.480 "trtype": "tcp", 00:19:55.480 "traddr": "10.0.0.2", 00:19:55.480 "adrfam": "ipv4", 00:19:55.480 "trsvcid": "4420", 00:19:55.480 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.480 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:55.480 "prchk_reftag": false, 00:19:55.480 "prchk_guard": false, 00:19:55.480 "hdgst": false, 00:19:55.480 "ddgst": false, 00:19:55.480 "psk": "/tmp/tmp.mg5DatZNCS" 00:19:55.480 } 00:19:55.480 } 00:19:55.480 Got JSON-RPC error response 00:19:55.480 GoRPCClient: error on JSON-RPC call 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 100546 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100546 ']' 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100546 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100546 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:55.480 killing process with pid 100546 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100546' 00:19:55.480 Received shutdown signal, test time was about 10.000000 seconds 00:19:55.480 00:19:55.480 Latency(us) 00:19:55.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.480 =================================================================================================================== 00:19:55.480 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100546 00:19:55.480 [2024-07-10 14:38:07.187644] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100546 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.kfisWWiByd 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.kfisWWiByd 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.kfisWWiByd 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kfisWWiByd' 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100578 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100578 /var/tmp/bdevperf.sock 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100578 ']' 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:55.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:55.480 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.480 [2024-07-10 14:38:07.397762] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:19:55.480 [2024-07-10 14:38:07.397857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100578 ] 00:19:55.480 [2024-07-10 14:38:07.517130] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:55.480 [2024-07-10 14:38:07.535315] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.480 [2024-07-10 14:38:07.571631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.481 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:55.481 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:55.481 14:38:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.kfisWWiByd 00:19:55.739 [2024-07-10 14:38:07.883396] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:55.739 [2024-07-10 14:38:07.883511] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:55.739 [2024-07-10 14:38:07.888384] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:55.739 [2024-07-10 14:38:07.888420] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:55.739 [2024-07-10 14:38:07.888474] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:55.739 [2024-07-10 14:38:07.889076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x629c10 (107): Transport endpoint is not connected 00:19:55.739 [2024-07-10 14:38:07.890062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x629c10 (9): Bad file descriptor 00:19:55.739 [2024-07-10 14:38:07.891058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:55.739 [2024-07-10 14:38:07.891083] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:55.739 [2024-07-10 14:38:07.891097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
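Unlike the first negative case, the failure here is on the target side's key lookup: tcp.c and posix.c report "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1". The identity is the PSK-type/hash prefix followed by the host and subsystem NQNs, and nothing was ever registered for host2, so the lookup has nothing to match; the JSON-RPC error below is again the expected result. The handshake could only succeed if host2 had been added with its own key, along the lines of the add_host call used for host1 at target/tls.sh@58 (shown only to illustrate why the lookup fails; the test deliberately does not do this):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.kfisWWiByd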
00:19:55.739 2024/07/10 14:38:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.kfisWWiByd subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:55.739 request: 00:19:55.739 { 00:19:55.739 "method": "bdev_nvme_attach_controller", 00:19:55.739 "params": { 00:19:55.739 "name": "TLSTEST", 00:19:55.739 "trtype": "tcp", 00:19:55.739 "traddr": "10.0.0.2", 00:19:55.739 "adrfam": "ipv4", 00:19:55.739 "trsvcid": "4420", 00:19:55.739 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.739 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:55.739 "prchk_reftag": false, 00:19:55.739 "prchk_guard": false, 00:19:55.739 "hdgst": false, 00:19:55.739 "ddgst": false, 00:19:55.739 "psk": "/tmp/tmp.kfisWWiByd" 00:19:55.739 } 00:19:55.739 } 00:19:55.739 Got JSON-RPC error response 00:19:55.739 GoRPCClient: error on JSON-RPC call 00:19:55.739 14:38:07 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 100578 00:19:55.739 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100578 ']' 00:19:55.739 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100578 00:19:55.739 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:55.739 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:55.739 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100578 00:19:55.739 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:55.739 killing process with pid 100578 00:19:55.739 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:55.739 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100578' 00:19:55.739 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100578 00:19:55.739 Received shutdown signal, test time was about 10.000000 seconds 00:19:55.739 00:19:55.739 Latency(us) 00:19:55.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.739 =================================================================================================================== 00:19:55.739 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:55.739 [2024-07-10 14:38:07.937436] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:55.739 14:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100578 00:19:55.998 14:38:08 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:55.998 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:55.998 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:55.998 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:55.998 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:55.998 14:38:08 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kfisWWiByd 00:19:55.998 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:55.998 14:38:08 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kfisWWiByd 00:19:55.998 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:55.998 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.998 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:55.998 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.998 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kfisWWiByd 00:19:55.998 14:38:08 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:55.998 14:38:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:55.998 14:38:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:55.998 14:38:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kfisWWiByd' 00:19:55.998 14:38:08 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:55.998 14:38:08 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100603 00:19:55.998 14:38:08 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:55.998 14:38:08 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100603 /var/tmp/bdevperf.sock 00:19:55.998 14:38:08 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:55.998 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100603 ']' 00:19:55.998 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:55.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:55.998 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:55.998 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:55.998 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:55.998 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.998 [2024-07-10 14:38:08.128225] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:19:55.998 [2024-07-10 14:38:08.128330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100603 ] 00:19:55.998 [2024-07-10 14:38:08.247165] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:55.998 [2024-07-10 14:38:08.266571] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.256 [2024-07-10 14:38:08.303145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.256 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:56.256 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:56.256 14:38:08 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kfisWWiByd 00:19:56.515 [2024-07-10 14:38:08.647419] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:56.515 [2024-07-10 14:38:08.647533] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:56.515 [2024-07-10 14:38:08.657674] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:56.515 [2024-07-10 14:38:08.657718] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:56.515 [2024-07-10 14:38:08.657778] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:56.515 [2024-07-10 14:38:08.658087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f67c10 (107): Transport endpoint is not connected 00:19:56.515 [2024-07-10 14:38:08.659077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f67c10 (9): Bad file descriptor 00:19:56.515 [2024-07-10 14:38:08.660074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:56.515 [2024-07-10 14:38:08.660100] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:56.515 [2024-07-10 14:38:08.660115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:56.515 2024/07/10 14:38:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.kfisWWiByd subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:56.515 request: 00:19:56.515 { 00:19:56.515 "method": "bdev_nvme_attach_controller", 00:19:56.515 "params": { 00:19:56.515 "name": "TLSTEST", 00:19:56.515 "trtype": "tcp", 00:19:56.515 "traddr": "10.0.0.2", 00:19:56.515 "adrfam": "ipv4", 00:19:56.515 "trsvcid": "4420", 00:19:56.515 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:56.515 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:56.515 "prchk_reftag": false, 00:19:56.515 "prchk_guard": false, 00:19:56.515 "hdgst": false, 00:19:56.515 "ddgst": false, 00:19:56.515 "psk": "/tmp/tmp.kfisWWiByd" 00:19:56.515 } 00:19:56.515 } 00:19:56.515 Got JSON-RPC error response 00:19:56.515 GoRPCClient: error on JSON-RPC call 00:19:56.515 14:38:08 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 100603 00:19:56.515 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100603 ']' 00:19:56.515 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100603 00:19:56.515 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:56.515 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:56.515 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100603 00:19:56.515 killing process with pid 100603 00:19:56.515 Received shutdown signal, test time was about 10.000000 seconds 00:19:56.515 00:19:56.515 Latency(us) 00:19:56.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.515 =================================================================================================================== 00:19:56.515 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:56.515 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:56.515 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:56.515 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100603' 00:19:56.515 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100603 00:19:56.515 [2024-07-10 14:38:08.713234] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:56.515 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100603 00:19:56.813 14:38:08 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:56.813 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:56.813 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:56.813 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:56.813 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:56.813 14:38:08 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:56.813 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:56.813 14:38:08 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:56.813 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:56.813 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:56.813 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:56.813 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:56.813 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:56.813 14:38:08 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:56.813 14:38:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:56.813 14:38:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:56.813 14:38:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:56.813 14:38:08 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:56.813 14:38:08 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100631 00:19:56.813 14:38:08 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:56.813 14:38:08 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:56.813 14:38:08 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100631 /var/tmp/bdevperf.sock 00:19:56.813 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100631 ']' 00:19:56.813 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:56.813 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:56.813 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:56.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:56.813 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:56.814 14:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.814 [2024-07-10 14:38:08.896261] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:19:56.814 [2024-07-10 14:38:08.896367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100631 ] 00:19:56.814 [2024-07-10 14:38:09.014566] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:56.814 [2024-07-10 14:38:09.031744] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.814 [2024-07-10 14:38:09.068187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.072 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:57.072 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:57.072 14:38:09 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:57.332 [2024-07-10 14:38:09.377771] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:57.332 [2024-07-10 14:38:09.379692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x178fbc0 (9): Bad file descriptor 00:19:57.332 [2024-07-10 14:38:09.380686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:57.332 [2024-07-10 14:38:09.380712] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:57.332 [2024-07-10 14:38:09.380727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:57.332 2024/07/10 14:38:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:57.332 request: 00:19:57.332 { 00:19:57.332 "method": "bdev_nvme_attach_controller", 00:19:57.332 "params": { 00:19:57.332 "name": "TLSTEST", 00:19:57.332 "trtype": "tcp", 00:19:57.332 "traddr": "10.0.0.2", 00:19:57.332 "adrfam": "ipv4", 00:19:57.332 "trsvcid": "4420", 00:19:57.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:57.332 "prchk_reftag": false, 00:19:57.332 "prchk_guard": false, 00:19:57.332 "hdgst": false, 00:19:57.332 "ddgst": false 00:19:57.332 } 00:19:57.332 } 00:19:57.332 Got JSON-RPC error response 00:19:57.332 GoRPCClient: error on JSON-RPC call 00:19:57.332 14:38:09 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 100631 00:19:57.332 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100631 ']' 00:19:57.332 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100631 00:19:57.332 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:57.332 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:57.332 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100631 00:19:57.332 killing process with pid 100631 00:19:57.332 Received shutdown signal, test time was about 10.000000 seconds 00:19:57.332 00:19:57.332 Latency(us) 00:19:57.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.332 =================================================================================================================== 00:19:57.332 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:57.332 14:38:09 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:57.332 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:57.332 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100631' 00:19:57.332 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100631 00:19:57.332 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100631 00:19:57.332 14:38:09 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:57.332 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:57.332 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:57.332 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:57.332 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:57.332 14:38:09 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 100076 00:19:57.332 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100076 ']' 00:19:57.332 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100076 00:19:57.332 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:57.332 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:57.332 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100076 00:19:57.332 killing process with pid 100076 00:19:57.332 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:57.332 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:57.332 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100076' 00:19:57.332 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100076 00:19:57.332 [2024-07-10 14:38:09.589236] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:57.332 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100076 00:19:57.591 14:38:09 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:57.591 14:38:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:57.591 14:38:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:57.592 14:38:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:57.592 14:38:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:57.592 14:38:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:57.592 14:38:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:57.592 14:38:09 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:57.592 14:38:09 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:57.592 14:38:09 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.6xvUcQqoIH 00:19:57.592 14:38:09 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:57.592 14:38:09 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.6xvUcQqoIH 00:19:57.592 14:38:09 nvmf_tcp.nvmf_tls -- 
target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:57.592 14:38:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:57.592 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:57.592 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.592 14:38:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100673 00:19:57.592 14:38:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100673 00:19:57.592 14:38:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:57.592 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100673 ']' 00:19:57.592 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.592 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:57.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.592 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.592 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:57.592 14:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.592 [2024-07-10 14:38:09.855795] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:19:57.592 [2024-07-10 14:38:09.855886] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.849 [2024-07-10 14:38:09.976878] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:57.849 [2024-07-10 14:38:09.988854] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.849 [2024-07-10 14:38:10.022874] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.849 [2024-07-10 14:38:10.022926] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:57.849 [2024-07-10 14:38:10.022937] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:57.849 [2024-07-10 14:38:10.022945] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:57.849 [2024-07-10 14:38:10.022952] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
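The target starting here (target/tls.sh@163) will be configured with /tmp/tmp.6xvUcQqoIH, the key produced just above by format_interchange_psk (target/tls.sh@159, backed by format_key in nvmf/common.sh@715). The base64 payloads of both NVMeTLSkey-1 strings in this log decode back to the hex strings that were passed in, followed by four extra bytes, and the middle field carries the hash indicator (01 for the 32-character key, 02 for the 48-character key used here). A rough stand-in for that helper is sketched below; the trailing four bytes are assumed to be a little-endian CRC-32 of the key, and the framing is inferred from the strings visible in this log rather than copied from nvmf/common.sh.

key=00112233445566778899aabbccddeeff0011223344556677   # value passed at target/tls.sh@159
digest=2                                                # printed as the :02: field
python3 - "$key" "$digest" << 'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                    # the hex string itself is the encoded key material
crc = zlib.crc32(key).to_bytes(4, "little")   # assumption: appended checksum is CRC-32, little-endian
print("NVMeTLSkey-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY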
00:19:57.849 [2024-07-10 14:38:10.022976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.849 14:38:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:57.849 14:38:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:57.849 14:38:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:57.849 14:38:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:57.849 14:38:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.849 14:38:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.849 14:38:10 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.6xvUcQqoIH 00:19:57.849 14:38:10 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.6xvUcQqoIH 00:19:57.849 14:38:10 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:58.415 [2024-07-10 14:38:10.445881] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.415 14:38:10 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:58.674 14:38:10 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:58.674 [2024-07-10 14:38:10.946003] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:58.674 [2024-07-10 14:38:10.946215] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.932 14:38:10 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:58.932 malloc0 00:19:58.932 14:38:11 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:59.190 14:38:11 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6xvUcQqoIH 00:19:59.449 [2024-07-10 14:38:11.704922] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:59.449 14:38:11 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6xvUcQqoIH 00:19:59.449 14:38:11 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:59.449 14:38:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:59.449 14:38:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:59.449 14:38:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.6xvUcQqoIH' 00:19:59.449 14:38:11 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:59.449 14:38:11 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100761 00:19:59.449 14:38:11 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:59.449 14:38:11 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:59.449 14:38:11 
nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100761 /var/tmp/bdevperf.sock 00:19:59.449 14:38:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100761 ']' 00:19:59.449 14:38:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:59.449 14:38:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:59.449 14:38:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:59.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:59.449 14:38:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:59.449 14:38:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.708 [2024-07-10 14:38:11.781100] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:19:59.708 [2024-07-10 14:38:11.781198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100761 ] 00:19:59.708 [2024-07-10 14:38:11.902916] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:59.708 [2024-07-10 14:38:11.918789] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.708 [2024-07-10 14:38:11.955938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.967 14:38:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:59.967 14:38:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:59.967 14:38:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6xvUcQqoIH 00:20:00.224 [2024-07-10 14:38:12.337244] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:00.224 [2024-07-10 14:38:12.337408] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:00.224 TLSTESTn1 00:20:00.224 14:38:12 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:00.483 Running I/O for 10 seconds... 
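This run is the positive counterpart to the earlier failure cases: the target registered /tmp/tmp.6xvUcQqoIH for host1 at target/tls.sh@58 above and the initiator passes the same key, so the attach succeeds and perform_tests drives TLSTESTn1 through the 10-second verify workload whose results follow. Stripped of the tracing, the client side of this test is the three commands recorded above (paths, NQNs and the RPC socket are this job's values; bdevperf is backgrounded here only for illustration, the harness manages the process itself):

# 1. Start bdevperf idle; -z keeps it waiting for configuration on the RPC socket.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 &

# 2. Attach the TLS-protected controller using the PSK registered for host1 on the target.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6xvUcQqoIH

# 3. Kick off the configured workload over the same socket.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests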
00:20:10.470 00:20:10.470 Latency(us) 00:20:10.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.470 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:10.470 Verification LBA range: start 0x0 length 0x2000 00:20:10.470 TLSTESTn1 : 10.02 3812.65 14.89 0.00 0.00 33506.58 6374.87 31218.97 00:20:10.470 =================================================================================================================== 00:20:10.470 Total : 3812.65 14.89 0.00 0.00 33506.58 6374.87 31218.97 00:20:10.470 0 00:20:10.470 14:38:22 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:10.470 14:38:22 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 100761 00:20:10.470 14:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100761 ']' 00:20:10.470 14:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100761 00:20:10.470 14:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:10.470 14:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:10.470 14:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100761 00:20:10.470 killing process with pid 100761 00:20:10.470 Received shutdown signal, test time was about 10.000000 seconds 00:20:10.470 00:20:10.470 Latency(us) 00:20:10.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.470 =================================================================================================================== 00:20:10.470 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:10.470 14:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:10.470 14:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:10.470 14:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100761' 00:20:10.470 14:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100761 00:20:10.470 [2024-07-10 14:38:22.612035] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:10.470 14:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100761 00:20:10.470 14:38:22 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.6xvUcQqoIH 00:20:10.470 14:38:22 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6xvUcQqoIH 00:20:10.470 14:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:10.470 14:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6xvUcQqoIH 00:20:10.470 14:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:10.470 14:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:10.470 14:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:10.470 14:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:10.470 14:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6xvUcQqoIH 00:20:10.470 14:38:22 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:20:10.470 14:38:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:10.470 14:38:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:10.470 14:38:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.6xvUcQqoIH' 00:20:10.470 14:38:22 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:10.729 14:38:22 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100892 00:20:10.729 14:38:22 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:10.729 14:38:22 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:10.729 14:38:22 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100892 /var/tmp/bdevperf.sock 00:20:10.729 14:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100892 ']' 00:20:10.729 14:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:10.729 14:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:10.729 14:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:10.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:10.729 14:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:10.729 14:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.729 [2024-07-10 14:38:22.833250] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:20:10.729 [2024-07-10 14:38:22.833394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100892 ] 00:20:10.729 [2024-07-10 14:38:22.961371] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:20:10.729 [2024-07-10 14:38:22.977678] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.729 [2024-07-10 14:38:23.014687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.986 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:10.986 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:10.986 14:38:23 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6xvUcQqoIH 00:20:11.245 [2024-07-10 14:38:23.339651] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:11.245 [2024-07-10 14:38:23.339753] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:11.245 [2024-07-10 14:38:23.339773] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.6xvUcQqoIH 00:20:11.245 2024/07/10 14:38:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.6xvUcQqoIH subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:20:11.245 request: 00:20:11.245 { 00:20:11.245 "method": "bdev_nvme_attach_controller", 00:20:11.245 "params": { 00:20:11.245 "name": "TLSTEST", 00:20:11.245 "trtype": "tcp", 00:20:11.245 "traddr": "10.0.0.2", 00:20:11.245 "adrfam": "ipv4", 00:20:11.245 "trsvcid": "4420", 00:20:11.245 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.245 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:11.245 "prchk_reftag": false, 00:20:11.245 "prchk_guard": false, 00:20:11.245 "hdgst": false, 00:20:11.245 "ddgst": false, 00:20:11.245 "psk": "/tmp/tmp.6xvUcQqoIH" 00:20:11.245 } 00:20:11.245 } 00:20:11.245 Got JSON-RPC error response 00:20:11.245 GoRPCClient: error on JSON-RPC call 00:20:11.245 14:38:23 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 100892 00:20:11.245 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100892 ']' 00:20:11.245 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100892 00:20:11.245 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:11.245 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:11.245 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100892 00:20:11.245 killing process with pid 100892 00:20:11.245 Received shutdown signal, test time was about 10.000000 seconds 00:20:11.245 00:20:11.245 Latency(us) 00:20:11.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.245 =================================================================================================================== 00:20:11.245 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:11.245 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:11.245 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:11.245 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
100892' 00:20:11.245 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100892 00:20:11.245 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100892 00:20:11.245 14:38:23 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:11.245 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:11.245 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:11.245 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:11.245 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:11.245 14:38:23 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 100673 00:20:11.245 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100673 ']' 00:20:11.245 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100673 00:20:11.245 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:11.245 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:11.245 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100673 00:20:11.503 killing process with pid 100673 00:20:11.503 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:11.503 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:11.503 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100673' 00:20:11.503 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100673 00:20:11.503 [2024-07-10 14:38:23.545923] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:11.503 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100673 00:20:11.503 14:38:23 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:11.503 14:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:11.503 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:11.503 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.503 14:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100928 00:20:11.503 14:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100928 00:20:11.503 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100928 ']' 00:20:11.504 14:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:11.504 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.504 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:11.504 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.504 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:11.504 14:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.504 [2024-07-10 14:38:23.753915] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 
00:20:11.504 [2024-07-10 14:38:23.754027] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.762 [2024-07-10 14:38:23.876941] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:11.762 [2024-07-10 14:38:23.895413] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.762 [2024-07-10 14:38:23.930007] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.762 [2024-07-10 14:38:23.930062] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.762 [2024-07-10 14:38:23.930075] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.762 [2024-07-10 14:38:23.930083] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.762 [2024-07-10 14:38:23.930090] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:11.762 [2024-07-10 14:38:23.930121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.762 14:38:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:11.762 14:38:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:11.762 14:38:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:11.762 14:38:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:11.762 14:38:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.762 14:38:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.762 14:38:24 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.6xvUcQqoIH 00:20:11.762 14:38:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:11.762 14:38:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.6xvUcQqoIH 00:20:11.762 14:38:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:20:11.762 14:38:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:11.763 14:38:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:20:11.763 14:38:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:11.763 14:38:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.6xvUcQqoIH 00:20:11.763 14:38:24 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.6xvUcQqoIH 00:20:11.763 14:38:24 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:12.021 [2024-07-10 14:38:24.305355] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.280 14:38:24 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:12.538 14:38:24 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:12.796 
[2024-07-10 14:38:24.889443] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:12.796 [2024-07-10 14:38:24.889654] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.796 14:38:24 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:13.055 malloc0 00:20:13.055 14:38:25 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:13.314 14:38:25 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6xvUcQqoIH 00:20:13.573 [2024-07-10 14:38:25.680505] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:13.573 [2024-07-10 14:38:25.680549] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:13.573 [2024-07-10 14:38:25.680584] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:13.573 2024/07/10 14:38:25 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.6xvUcQqoIH], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:20:13.573 request: 00:20:13.573 { 00:20:13.573 "method": "nvmf_subsystem_add_host", 00:20:13.573 "params": { 00:20:13.573 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.573 "host": "nqn.2016-06.io.spdk:host1", 00:20:13.573 "psk": "/tmp/tmp.6xvUcQqoIH" 00:20:13.573 } 00:20:13.573 } 00:20:13.573 Got JSON-RPC error response 00:20:13.573 GoRPCClient: error on JSON-RPC call 00:20:13.573 14:38:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:13.573 14:38:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:13.573 14:38:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:13.573 14:38:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:13.573 14:38:25 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 100928 00:20:13.573 14:38:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100928 ']' 00:20:13.573 14:38:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100928 00:20:13.573 14:38:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:13.573 14:38:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:13.573 14:38:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100928 00:20:13.573 14:38:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:13.573 14:38:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:13.573 killing process with pid 100928 00:20:13.573 14:38:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100928' 00:20:13.573 14:38:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100928 00:20:13.573 14:38:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100928 00:20:13.832 14:38:25 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.6xvUcQqoIH 00:20:13.832 14:38:25 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:13.832 14:38:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:20:13.832 14:38:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:13.832 14:38:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.832 14:38:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=101025 00:20:13.832 14:38:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:13.832 14:38:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 101025 00:20:13.832 14:38:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 101025 ']' 00:20:13.832 14:38:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.832 14:38:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:13.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.832 14:38:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.832 14:38:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:13.832 14:38:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.832 [2024-07-10 14:38:25.934456] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:20:13.832 [2024-07-10 14:38:25.934549] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.832 [2024-07-10 14:38:26.052973] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:13.832 [2024-07-10 14:38:26.067663] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.832 [2024-07-10 14:38:26.102696] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.832 [2024-07-10 14:38:26.102749] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.832 [2024-07-10 14:38:26.102760] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.832 [2024-07-10 14:38:26.102768] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.832 [2024-07-10 14:38:26.102775] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:13.832 [2024-07-10 14:38:26.102800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.091 14:38:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:14.091 14:38:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:14.091 14:38:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:14.091 14:38:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:14.091 14:38:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.091 14:38:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.091 14:38:26 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.6xvUcQqoIH 00:20:14.091 14:38:26 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.6xvUcQqoIH 00:20:14.091 14:38:26 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:14.350 [2024-07-10 14:38:26.509749] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.350 14:38:26 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:14.608 14:38:26 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:14.867 [2024-07-10 14:38:26.993837] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:14.867 [2024-07-10 14:38:26.994050] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.867 14:38:27 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:15.126 malloc0 00:20:15.126 14:38:27 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:15.384 14:38:27 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6xvUcQqoIH 00:20:15.642 [2024-07-10 14:38:27.828591] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:15.642 14:38:27 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=101113 00:20:15.642 14:38:27 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:15.642 14:38:27 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:15.642 14:38:27 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 101113 /var/tmp/bdevperf.sock 00:20:15.642 14:38:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 101113 ']' 00:20:15.642 14:38:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:15.642 14:38:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:15.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:15.642 14:38:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:15.642 14:38:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:15.642 14:38:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.642 [2024-07-10 14:38:27.898204] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:20:15.642 [2024-07-10 14:38:27.898351] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101113 ] 00:20:15.900 [2024-07-10 14:38:28.017572] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:15.900 [2024-07-10 14:38:28.030788] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.900 [2024-07-10 14:38:28.068125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:15.900 14:38:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:15.900 14:38:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:15.900 14:38:28 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6xvUcQqoIH 00:20:16.158 [2024-07-10 14:38:28.437686] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:16.158 [2024-07-10 14:38:28.437842] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:16.416 TLSTESTn1 00:20:16.416 14:38:28 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:20:16.674 14:38:28 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:20:16.674 "subsystems": [ 00:20:16.674 { 00:20:16.674 "subsystem": "keyring", 00:20:16.674 "config": [] 00:20:16.674 }, 00:20:16.674 { 00:20:16.674 "subsystem": "iobuf", 00:20:16.674 "config": [ 00:20:16.674 { 00:20:16.674 "method": "iobuf_set_options", 00:20:16.674 "params": { 00:20:16.674 "large_bufsize": 135168, 00:20:16.674 "large_pool_count": 1024, 00:20:16.674 "small_bufsize": 8192, 00:20:16.674 "small_pool_count": 8192 00:20:16.674 } 00:20:16.674 } 00:20:16.674 ] 00:20:16.674 }, 00:20:16.674 { 00:20:16.674 "subsystem": "sock", 00:20:16.674 "config": [ 00:20:16.674 { 00:20:16.674 "method": "sock_set_default_impl", 00:20:16.674 "params": { 00:20:16.674 "impl_name": "posix" 00:20:16.674 } 00:20:16.674 }, 00:20:16.674 { 00:20:16.674 "method": "sock_impl_set_options", 00:20:16.674 "params": { 00:20:16.674 "enable_ktls": false, 00:20:16.674 "enable_placement_id": 0, 00:20:16.674 "enable_quickack": false, 00:20:16.674 "enable_recv_pipe": true, 00:20:16.674 "enable_zerocopy_send_client": false, 00:20:16.674 "enable_zerocopy_send_server": true, 00:20:16.674 "impl_name": "ssl", 00:20:16.674 "recv_buf_size": 4096, 00:20:16.674 "send_buf_size": 4096, 00:20:16.674 "tls_version": 0, 00:20:16.674 "zerocopy_threshold": 0 00:20:16.674 } 00:20:16.674 }, 00:20:16.674 { 00:20:16.674 "method": "sock_impl_set_options", 
00:20:16.674 "params": { 00:20:16.674 "enable_ktls": false, 00:20:16.674 "enable_placement_id": 0, 00:20:16.674 "enable_quickack": false, 00:20:16.674 "enable_recv_pipe": true, 00:20:16.674 "enable_zerocopy_send_client": false, 00:20:16.674 "enable_zerocopy_send_server": true, 00:20:16.674 "impl_name": "posix", 00:20:16.674 "recv_buf_size": 2097152, 00:20:16.674 "send_buf_size": 2097152, 00:20:16.674 "tls_version": 0, 00:20:16.674 "zerocopy_threshold": 0 00:20:16.674 } 00:20:16.674 } 00:20:16.674 ] 00:20:16.674 }, 00:20:16.674 { 00:20:16.674 "subsystem": "vmd", 00:20:16.674 "config": [] 00:20:16.675 }, 00:20:16.675 { 00:20:16.675 "subsystem": "accel", 00:20:16.675 "config": [ 00:20:16.675 { 00:20:16.675 "method": "accel_set_options", 00:20:16.675 "params": { 00:20:16.675 "buf_count": 2048, 00:20:16.675 "large_cache_size": 16, 00:20:16.675 "sequence_count": 2048, 00:20:16.675 "small_cache_size": 128, 00:20:16.675 "task_count": 2048 00:20:16.675 } 00:20:16.675 } 00:20:16.675 ] 00:20:16.675 }, 00:20:16.675 { 00:20:16.675 "subsystem": "bdev", 00:20:16.675 "config": [ 00:20:16.675 { 00:20:16.675 "method": "bdev_set_options", 00:20:16.675 "params": { 00:20:16.675 "bdev_auto_examine": true, 00:20:16.675 "bdev_io_cache_size": 256, 00:20:16.675 "bdev_io_pool_size": 65535, 00:20:16.675 "iobuf_large_cache_size": 16, 00:20:16.675 "iobuf_small_cache_size": 128 00:20:16.675 } 00:20:16.675 }, 00:20:16.675 { 00:20:16.675 "method": "bdev_raid_set_options", 00:20:16.675 "params": { 00:20:16.675 "process_window_size_kb": 1024 00:20:16.675 } 00:20:16.675 }, 00:20:16.675 { 00:20:16.675 "method": "bdev_iscsi_set_options", 00:20:16.675 "params": { 00:20:16.675 "timeout_sec": 30 00:20:16.675 } 00:20:16.675 }, 00:20:16.675 { 00:20:16.675 "method": "bdev_nvme_set_options", 00:20:16.675 "params": { 00:20:16.675 "action_on_timeout": "none", 00:20:16.675 "allow_accel_sequence": false, 00:20:16.675 "arbitration_burst": 0, 00:20:16.675 "bdev_retry_count": 3, 00:20:16.675 "ctrlr_loss_timeout_sec": 0, 00:20:16.675 "delay_cmd_submit": true, 00:20:16.675 "dhchap_dhgroups": [ 00:20:16.675 "null", 00:20:16.675 "ffdhe2048", 00:20:16.675 "ffdhe3072", 00:20:16.675 "ffdhe4096", 00:20:16.675 "ffdhe6144", 00:20:16.675 "ffdhe8192" 00:20:16.675 ], 00:20:16.675 "dhchap_digests": [ 00:20:16.675 "sha256", 00:20:16.675 "sha384", 00:20:16.675 "sha512" 00:20:16.675 ], 00:20:16.675 "disable_auto_failback": false, 00:20:16.675 "fast_io_fail_timeout_sec": 0, 00:20:16.675 "generate_uuids": false, 00:20:16.675 "high_priority_weight": 0, 00:20:16.675 "io_path_stat": false, 00:20:16.675 "io_queue_requests": 0, 00:20:16.675 "keep_alive_timeout_ms": 10000, 00:20:16.675 "low_priority_weight": 0, 00:20:16.675 "medium_priority_weight": 0, 00:20:16.675 "nvme_adminq_poll_period_us": 10000, 00:20:16.675 "nvme_error_stat": false, 00:20:16.675 "nvme_ioq_poll_period_us": 0, 00:20:16.675 "rdma_cm_event_timeout_ms": 0, 00:20:16.675 "rdma_max_cq_size": 0, 00:20:16.675 "rdma_srq_size": 0, 00:20:16.675 "reconnect_delay_sec": 0, 00:20:16.675 "timeout_admin_us": 0, 00:20:16.675 "timeout_us": 0, 00:20:16.675 "transport_ack_timeout": 0, 00:20:16.675 "transport_retry_count": 4, 00:20:16.675 "transport_tos": 0 00:20:16.675 } 00:20:16.675 }, 00:20:16.675 { 00:20:16.675 "method": "bdev_nvme_set_hotplug", 00:20:16.675 "params": { 00:20:16.675 "enable": false, 00:20:16.675 "period_us": 100000 00:20:16.675 } 00:20:16.675 }, 00:20:16.675 { 00:20:16.675 "method": "bdev_malloc_create", 00:20:16.675 "params": { 00:20:16.675 "block_size": 4096, 00:20:16.675 "name": "malloc0", 
00:20:16.675 "num_blocks": 8192, 00:20:16.675 "optimal_io_boundary": 0, 00:20:16.675 "physical_block_size": 4096, 00:20:16.675 "uuid": "f5a72fd8-375b-4080-b286-57cb50133918" 00:20:16.675 } 00:20:16.675 }, 00:20:16.675 { 00:20:16.675 "method": "bdev_wait_for_examine" 00:20:16.675 } 00:20:16.675 ] 00:20:16.675 }, 00:20:16.675 { 00:20:16.675 "subsystem": "nbd", 00:20:16.675 "config": [] 00:20:16.675 }, 00:20:16.675 { 00:20:16.675 "subsystem": "scheduler", 00:20:16.675 "config": [ 00:20:16.675 { 00:20:16.675 "method": "framework_set_scheduler", 00:20:16.675 "params": { 00:20:16.675 "name": "static" 00:20:16.675 } 00:20:16.675 } 00:20:16.675 ] 00:20:16.675 }, 00:20:16.675 { 00:20:16.675 "subsystem": "nvmf", 00:20:16.675 "config": [ 00:20:16.675 { 00:20:16.675 "method": "nvmf_set_config", 00:20:16.675 "params": { 00:20:16.675 "admin_cmd_passthru": { 00:20:16.675 "identify_ctrlr": false 00:20:16.675 }, 00:20:16.675 "discovery_filter": "match_any" 00:20:16.675 } 00:20:16.675 }, 00:20:16.675 { 00:20:16.675 "method": "nvmf_set_max_subsystems", 00:20:16.675 "params": { 00:20:16.675 "max_subsystems": 1024 00:20:16.675 } 00:20:16.675 }, 00:20:16.675 { 00:20:16.675 "method": "nvmf_set_crdt", 00:20:16.675 "params": { 00:20:16.675 "crdt1": 0, 00:20:16.675 "crdt2": 0, 00:20:16.675 "crdt3": 0 00:20:16.675 } 00:20:16.675 }, 00:20:16.675 { 00:20:16.675 "method": "nvmf_create_transport", 00:20:16.675 "params": { 00:20:16.675 "abort_timeout_sec": 1, 00:20:16.675 "ack_timeout": 0, 00:20:16.675 "buf_cache_size": 4294967295, 00:20:16.675 "c2h_success": false, 00:20:16.675 "data_wr_pool_size": 0, 00:20:16.675 "dif_insert_or_strip": false, 00:20:16.675 "in_capsule_data_size": 4096, 00:20:16.675 "io_unit_size": 131072, 00:20:16.675 "max_aq_depth": 128, 00:20:16.675 "max_io_qpairs_per_ctrlr": 127, 00:20:16.675 "max_io_size": 131072, 00:20:16.675 "max_queue_depth": 128, 00:20:16.675 "num_shared_buffers": 511, 00:20:16.675 "sock_priority": 0, 00:20:16.675 "trtype": "TCP", 00:20:16.675 "zcopy": false 00:20:16.675 } 00:20:16.675 }, 00:20:16.675 { 00:20:16.675 "method": "nvmf_create_subsystem", 00:20:16.675 "params": { 00:20:16.675 "allow_any_host": false, 00:20:16.675 "ana_reporting": false, 00:20:16.675 "max_cntlid": 65519, 00:20:16.675 "max_namespaces": 10, 00:20:16.675 "min_cntlid": 1, 00:20:16.675 "model_number": "SPDK bdev Controller", 00:20:16.675 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.675 "serial_number": "SPDK00000000000001" 00:20:16.675 } 00:20:16.675 }, 00:20:16.675 { 00:20:16.675 "method": "nvmf_subsystem_add_host", 00:20:16.675 "params": { 00:20:16.675 "host": "nqn.2016-06.io.spdk:host1", 00:20:16.675 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.675 "psk": "/tmp/tmp.6xvUcQqoIH" 00:20:16.675 } 00:20:16.675 }, 00:20:16.675 { 00:20:16.675 "method": "nvmf_subsystem_add_ns", 00:20:16.675 "params": { 00:20:16.675 "namespace": { 00:20:16.675 "bdev_name": "malloc0", 00:20:16.675 "nguid": "F5A72FD8375B4080B28657CB50133918", 00:20:16.675 "no_auto_visible": false, 00:20:16.675 "nsid": 1, 00:20:16.675 "uuid": "f5a72fd8-375b-4080-b286-57cb50133918" 00:20:16.675 }, 00:20:16.675 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:20:16.675 } 00:20:16.675 }, 00:20:16.675 { 00:20:16.675 "method": "nvmf_subsystem_add_listener", 00:20:16.675 "params": { 00:20:16.675 "listen_address": { 00:20:16.675 "adrfam": "IPv4", 00:20:16.675 "traddr": "10.0.0.2", 00:20:16.675 "trsvcid": "4420", 00:20:16.675 "trtype": "TCP" 00:20:16.675 }, 00:20:16.675 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.675 "secure_channel": true 00:20:16.675 } 
00:20:16.675 } 00:20:16.675 ] 00:20:16.675 } 00:20:16.675 ] 00:20:16.675 }' 00:20:16.675 14:38:28 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:17.242 14:38:29 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:17.242 "subsystems": [ 00:20:17.242 { 00:20:17.242 "subsystem": "keyring", 00:20:17.242 "config": [] 00:20:17.242 }, 00:20:17.242 { 00:20:17.242 "subsystem": "iobuf", 00:20:17.242 "config": [ 00:20:17.242 { 00:20:17.242 "method": "iobuf_set_options", 00:20:17.242 "params": { 00:20:17.242 "large_bufsize": 135168, 00:20:17.242 "large_pool_count": 1024, 00:20:17.242 "small_bufsize": 8192, 00:20:17.242 "small_pool_count": 8192 00:20:17.242 } 00:20:17.242 } 00:20:17.242 ] 00:20:17.242 }, 00:20:17.242 { 00:20:17.242 "subsystem": "sock", 00:20:17.242 "config": [ 00:20:17.242 { 00:20:17.242 "method": "sock_set_default_impl", 00:20:17.242 "params": { 00:20:17.242 "impl_name": "posix" 00:20:17.242 } 00:20:17.242 }, 00:20:17.242 { 00:20:17.242 "method": "sock_impl_set_options", 00:20:17.242 "params": { 00:20:17.242 "enable_ktls": false, 00:20:17.242 "enable_placement_id": 0, 00:20:17.242 "enable_quickack": false, 00:20:17.242 "enable_recv_pipe": true, 00:20:17.242 "enable_zerocopy_send_client": false, 00:20:17.242 "enable_zerocopy_send_server": true, 00:20:17.242 "impl_name": "ssl", 00:20:17.242 "recv_buf_size": 4096, 00:20:17.242 "send_buf_size": 4096, 00:20:17.242 "tls_version": 0, 00:20:17.242 "zerocopy_threshold": 0 00:20:17.242 } 00:20:17.242 }, 00:20:17.242 { 00:20:17.242 "method": "sock_impl_set_options", 00:20:17.242 "params": { 00:20:17.242 "enable_ktls": false, 00:20:17.242 "enable_placement_id": 0, 00:20:17.242 "enable_quickack": false, 00:20:17.242 "enable_recv_pipe": true, 00:20:17.242 "enable_zerocopy_send_client": false, 00:20:17.242 "enable_zerocopy_send_server": true, 00:20:17.242 "impl_name": "posix", 00:20:17.242 "recv_buf_size": 2097152, 00:20:17.242 "send_buf_size": 2097152, 00:20:17.242 "tls_version": 0, 00:20:17.243 "zerocopy_threshold": 0 00:20:17.243 } 00:20:17.243 } 00:20:17.243 ] 00:20:17.243 }, 00:20:17.243 { 00:20:17.243 "subsystem": "vmd", 00:20:17.243 "config": [] 00:20:17.243 }, 00:20:17.243 { 00:20:17.243 "subsystem": "accel", 00:20:17.243 "config": [ 00:20:17.243 { 00:20:17.243 "method": "accel_set_options", 00:20:17.243 "params": { 00:20:17.243 "buf_count": 2048, 00:20:17.243 "large_cache_size": 16, 00:20:17.243 "sequence_count": 2048, 00:20:17.243 "small_cache_size": 128, 00:20:17.243 "task_count": 2048 00:20:17.243 } 00:20:17.243 } 00:20:17.243 ] 00:20:17.243 }, 00:20:17.243 { 00:20:17.243 "subsystem": "bdev", 00:20:17.243 "config": [ 00:20:17.243 { 00:20:17.243 "method": "bdev_set_options", 00:20:17.243 "params": { 00:20:17.243 "bdev_auto_examine": true, 00:20:17.243 "bdev_io_cache_size": 256, 00:20:17.243 "bdev_io_pool_size": 65535, 00:20:17.243 "iobuf_large_cache_size": 16, 00:20:17.243 "iobuf_small_cache_size": 128 00:20:17.243 } 00:20:17.243 }, 00:20:17.243 { 00:20:17.243 "method": "bdev_raid_set_options", 00:20:17.243 "params": { 00:20:17.243 "process_window_size_kb": 1024 00:20:17.243 } 00:20:17.243 }, 00:20:17.243 { 00:20:17.243 "method": "bdev_iscsi_set_options", 00:20:17.243 "params": { 00:20:17.243 "timeout_sec": 30 00:20:17.243 } 00:20:17.243 }, 00:20:17.243 { 00:20:17.243 "method": "bdev_nvme_set_options", 00:20:17.243 "params": { 00:20:17.243 "action_on_timeout": "none", 00:20:17.243 "allow_accel_sequence": false, 00:20:17.243 
"arbitration_burst": 0, 00:20:17.243 "bdev_retry_count": 3, 00:20:17.243 "ctrlr_loss_timeout_sec": 0, 00:20:17.243 "delay_cmd_submit": true, 00:20:17.243 "dhchap_dhgroups": [ 00:20:17.243 "null", 00:20:17.243 "ffdhe2048", 00:20:17.243 "ffdhe3072", 00:20:17.243 "ffdhe4096", 00:20:17.243 "ffdhe6144", 00:20:17.243 "ffdhe8192" 00:20:17.243 ], 00:20:17.243 "dhchap_digests": [ 00:20:17.243 "sha256", 00:20:17.243 "sha384", 00:20:17.243 "sha512" 00:20:17.243 ], 00:20:17.243 "disable_auto_failback": false, 00:20:17.243 "fast_io_fail_timeout_sec": 0, 00:20:17.243 "generate_uuids": false, 00:20:17.243 "high_priority_weight": 0, 00:20:17.243 "io_path_stat": false, 00:20:17.243 "io_queue_requests": 512, 00:20:17.243 "keep_alive_timeout_ms": 10000, 00:20:17.243 "low_priority_weight": 0, 00:20:17.243 "medium_priority_weight": 0, 00:20:17.243 "nvme_adminq_poll_period_us": 10000, 00:20:17.243 "nvme_error_stat": false, 00:20:17.243 "nvme_ioq_poll_period_us": 0, 00:20:17.243 "rdma_cm_event_timeout_ms": 0, 00:20:17.243 "rdma_max_cq_size": 0, 00:20:17.243 "rdma_srq_size": 0, 00:20:17.243 "reconnect_delay_sec": 0, 00:20:17.243 "timeout_admin_us": 0, 00:20:17.243 "timeout_us": 0, 00:20:17.243 "transport_ack_timeout": 0, 00:20:17.243 "transport_retry_count": 4, 00:20:17.243 "transport_tos": 0 00:20:17.243 } 00:20:17.243 }, 00:20:17.243 { 00:20:17.243 "method": "bdev_nvme_attach_controller", 00:20:17.243 "params": { 00:20:17.243 "adrfam": "IPv4", 00:20:17.243 "ctrlr_loss_timeout_sec": 0, 00:20:17.243 "ddgst": false, 00:20:17.243 "fast_io_fail_timeout_sec": 0, 00:20:17.243 "hdgst": false, 00:20:17.243 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:17.243 "name": "TLSTEST", 00:20:17.243 "prchk_guard": false, 00:20:17.243 "prchk_reftag": false, 00:20:17.243 "psk": "/tmp/tmp.6xvUcQqoIH", 00:20:17.243 "reconnect_delay_sec": 0, 00:20:17.243 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.243 "traddr": "10.0.0.2", 00:20:17.243 "trsvcid": "4420", 00:20:17.243 "trtype": "TCP" 00:20:17.243 } 00:20:17.243 }, 00:20:17.243 { 00:20:17.243 "method": "bdev_nvme_set_hotplug", 00:20:17.243 "params": { 00:20:17.243 "enable": false, 00:20:17.243 "period_us": 100000 00:20:17.243 } 00:20:17.243 }, 00:20:17.243 { 00:20:17.243 "method": "bdev_wait_for_examine" 00:20:17.243 } 00:20:17.243 ] 00:20:17.243 }, 00:20:17.243 { 00:20:17.243 "subsystem": "nbd", 00:20:17.243 "config": [] 00:20:17.243 } 00:20:17.243 ] 00:20:17.243 }' 00:20:17.243 14:38:29 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 101113 00:20:17.243 14:38:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 101113 ']' 00:20:17.243 14:38:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 101113 00:20:17.243 14:38:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:17.243 14:38:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:17.243 14:38:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101113 00:20:17.243 14:38:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:17.243 killing process with pid 101113 00:20:17.243 14:38:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:17.243 14:38:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101113' 00:20:17.243 14:38:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 101113 00:20:17.243 Received shutdown signal, test time was about 10.000000 seconds 00:20:17.243 00:20:17.243 
Latency(us) 00:20:17.243 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.243 =================================================================================================================== 00:20:17.243 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:17.243 [2024-07-10 14:38:29.306845] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:17.243 14:38:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 101113 00:20:17.243 14:38:29 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 101025 00:20:17.243 14:38:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 101025 ']' 00:20:17.243 14:38:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 101025 00:20:17.243 14:38:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:17.243 14:38:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:17.243 14:38:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101025 00:20:17.243 14:38:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:17.243 killing process with pid 101025 00:20:17.243 14:38:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:17.243 14:38:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101025' 00:20:17.243 14:38:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 101025 00:20:17.243 [2024-07-10 14:38:29.470258] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:17.243 14:38:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 101025 00:20:17.504 14:38:29 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:17.504 14:38:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:17.504 14:38:29 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:20:17.504 "subsystems": [ 00:20:17.504 { 00:20:17.504 "subsystem": "keyring", 00:20:17.504 "config": [] 00:20:17.504 }, 00:20:17.504 { 00:20:17.504 "subsystem": "iobuf", 00:20:17.504 "config": [ 00:20:17.504 { 00:20:17.504 "method": "iobuf_set_options", 00:20:17.504 "params": { 00:20:17.504 "large_bufsize": 135168, 00:20:17.504 "large_pool_count": 1024, 00:20:17.504 "small_bufsize": 8192, 00:20:17.504 "small_pool_count": 8192 00:20:17.504 } 00:20:17.504 } 00:20:17.504 ] 00:20:17.504 }, 00:20:17.504 { 00:20:17.504 "subsystem": "sock", 00:20:17.504 "config": [ 00:20:17.504 { 00:20:17.504 "method": "sock_set_default_impl", 00:20:17.504 "params": { 00:20:17.504 "impl_name": "posix" 00:20:17.504 } 00:20:17.504 }, 00:20:17.504 { 00:20:17.504 "method": "sock_impl_set_options", 00:20:17.504 "params": { 00:20:17.504 "enable_ktls": false, 00:20:17.504 "enable_placement_id": 0, 00:20:17.504 "enable_quickack": false, 00:20:17.504 "enable_recv_pipe": true, 00:20:17.504 "enable_zerocopy_send_client": false, 00:20:17.504 "enable_zerocopy_send_server": true, 00:20:17.504 "impl_name": "ssl", 00:20:17.504 "recv_buf_size": 4096, 00:20:17.504 "send_buf_size": 4096, 00:20:17.504 "tls_version": 0, 00:20:17.504 "zerocopy_threshold": 0 00:20:17.504 } 00:20:17.504 }, 00:20:17.504 { 00:20:17.504 "method": "sock_impl_set_options", 00:20:17.504 "params": { 00:20:17.504 "enable_ktls": false, 00:20:17.504 "enable_placement_id": 
0, 00:20:17.504 "enable_quickack": false, 00:20:17.504 "enable_recv_pipe": true, 00:20:17.504 "enable_zerocopy_send_client": false, 00:20:17.504 "enable_zerocopy_send_server": true, 00:20:17.504 "impl_name": "posix", 00:20:17.504 "recv_buf_size": 2097152, 00:20:17.504 "send_buf_size": 2097152, 00:20:17.504 "tls_version": 0, 00:20:17.504 "zerocopy_threshold": 0 00:20:17.504 } 00:20:17.504 } 00:20:17.504 ] 00:20:17.504 }, 00:20:17.504 { 00:20:17.504 "subsystem": "vmd", 00:20:17.504 "config": [] 00:20:17.504 }, 00:20:17.504 { 00:20:17.504 "subsystem": "accel", 00:20:17.504 "config": [ 00:20:17.504 { 00:20:17.504 "method": "accel_set_options", 00:20:17.504 "params": { 00:20:17.504 "buf_count": 2048, 00:20:17.504 "large_cache_size": 16, 00:20:17.504 "sequence_count": 2048, 00:20:17.504 "small_cache_size": 128, 00:20:17.504 "task_count": 2048 00:20:17.504 } 00:20:17.504 } 00:20:17.504 ] 00:20:17.504 }, 00:20:17.504 { 00:20:17.504 "subsystem": "bdev", 00:20:17.504 "config": [ 00:20:17.504 { 00:20:17.504 "method": "bdev_set_options", 00:20:17.504 "params": { 00:20:17.504 "bdev_auto_examine": true, 00:20:17.504 "bdev_io_cache_size": 256, 00:20:17.504 "bdev_io_pool_size": 65535, 00:20:17.504 "iobuf_large_cache_size": 16, 00:20:17.504 "iobuf_small_cache_size": 128 00:20:17.504 } 00:20:17.504 }, 00:20:17.504 { 00:20:17.504 "method": "bdev_raid_set_options", 00:20:17.504 "params": { 00:20:17.504 "process_window_size_kb": 1024 00:20:17.504 } 00:20:17.504 }, 00:20:17.504 { 00:20:17.504 "method": "bdev_iscsi_set_options", 00:20:17.504 "params": { 00:20:17.504 "timeout_sec": 30 00:20:17.504 } 00:20:17.504 }, 00:20:17.504 { 00:20:17.504 "method": "bdev_nvme_set_options", 00:20:17.504 "params": { 00:20:17.504 "action_on_timeout": "none", 00:20:17.504 "allow_accel_sequence": false, 00:20:17.504 "arbitration_burst": 0, 00:20:17.504 "bdev_retry_count": 3, 00:20:17.504 "ctrlr_loss_timeout_sec": 0, 00:20:17.504 "delay_cmd_submit": true, 00:20:17.504 "dhchap_dhgroups": [ 00:20:17.504 "null", 00:20:17.504 "ffdhe2048", 00:20:17.504 "ffdhe3072", 00:20:17.504 "ffdhe4096", 00:20:17.504 "ffdhe6144", 00:20:17.504 "ffdhe8192" 00:20:17.504 ], 00:20:17.504 "dhchap_digests": [ 00:20:17.504 "sha256", 00:20:17.504 "sha384", 00:20:17.504 "sha512" 00:20:17.504 ], 00:20:17.504 "disable_auto_failback": false, 00:20:17.504 "fast_io_fail_timeout_sec": 0, 00:20:17.504 "generate_uuids": false, 00:20:17.504 "high_priority_weight": 0, 00:20:17.504 "io_path_stat": false, 00:20:17.504 "io_queue_requests": 0, 00:20:17.504 "keep_alive_timeout_ms": 10000, 00:20:17.504 "low_priority_weight": 0, 00:20:17.504 "medium_priority_weight": 0, 00:20:17.504 "nvme_adminq_poll_period_us": 10000, 00:20:17.504 "nvme_error_stat": false, 00:20:17.504 "nvme_ioq_poll_period_us": 0, 00:20:17.504 "rdma_cm_event_timeout_ms": 0, 00:20:17.504 "rdma_max_cq_size": 0, 00:20:17.504 "rdma_srq_size": 0, 00:20:17.504 "reconnect_delay_sec": 0, 00:20:17.504 "timeout_admin_us": 0, 00:20:17.504 "timeout_us": 0, 00:20:17.504 "transport_ack_timeout": 0, 00:20:17.504 "transport_retry_count": 4, 00:20:17.504 "transport_tos": 0 00:20:17.504 } 00:20:17.504 }, 00:20:17.504 { 00:20:17.504 "method": "bdev_nvme_set_hotplug", 00:20:17.504 "params": { 00:20:17.504 "enable": false, 00:20:17.504 "period_us": 100000 00:20:17.504 } 00:20:17.504 }, 00:20:17.504 { 00:20:17.504 "method": "bdev_malloc_create", 00:20:17.504 "params": { 00:20:17.504 "block_size": 4096, 00:20:17.504 "name": "malloc0", 00:20:17.504 "num_blocks": 8192, 00:20:17.504 "optimal_io_boundary": 0, 00:20:17.504 
"physical_block_size": 4096, 00:20:17.504 "uuid": "f5a72fd8-375b-4080-b286-57cb50133918" 00:20:17.504 } 00:20:17.504 }, 00:20:17.504 { 00:20:17.504 "method": "bdev_wait_for_examine" 00:20:17.504 } 00:20:17.504 ] 00:20:17.504 }, 00:20:17.504 { 00:20:17.504 "subsystem": "nbd", 00:20:17.504 "config": [] 00:20:17.504 }, 00:20:17.504 { 00:20:17.504 "subsystem": "scheduler", 00:20:17.504 "config": [ 00:20:17.504 { 00:20:17.504 "method": "framework_set_scheduler", 00:20:17.504 "params": { 00:20:17.504 "name": "static" 00:20:17.504 } 00:20:17.504 } 00:20:17.504 ] 00:20:17.504 }, 00:20:17.504 { 00:20:17.504 "subsystem": "nvmf", 00:20:17.504 "config": [ 00:20:17.504 { 00:20:17.504 "method": "nvmf_set_config", 00:20:17.504 "params": { 00:20:17.504 "admin_cmd_passthru": { 00:20:17.504 "identify_ctrlr": false 00:20:17.504 }, 00:20:17.504 "discovery_filter": "match_any" 00:20:17.504 } 00:20:17.504 }, 00:20:17.504 { 00:20:17.504 "method": "nvmf_set_max_subsystems", 00:20:17.504 "params": { 00:20:17.504 "max_subsystems": 1024 00:20:17.504 } 00:20:17.504 }, 00:20:17.504 { 00:20:17.504 "method": "nvmf_set_crdt", 00:20:17.504 "params": { 00:20:17.504 "crdt1": 0, 00:20:17.504 "crdt2": 0, 00:20:17.504 "crdt3": 0 00:20:17.504 } 00:20:17.504 }, 00:20:17.504 { 00:20:17.504 "method": "nvmf_create_transport", 00:20:17.504 "params": { 00:20:17.504 "abort_timeout_sec": 1, 00:20:17.504 "ack_timeout": 0, 00:20:17.504 "buf_cache_size": 4294967295, 00:20:17.504 "c2h_success": false, 00:20:17.504 "data_wr_pool_size": 0, 00:20:17.504 "dif_insert_or_strip": false, 00:20:17.504 "in_capsule_data_size": 4096, 00:20:17.504 "io_unit_size": 131072, 00:20:17.504 "max_aq_depth": 128, 00:20:17.504 "max_io_qpairs_per_ctrlr": 127, 00:20:17.504 "max_io_size": 131072, 00:20:17.505 "max_queue_depth": 128, 00:20:17.505 "num_shared_buffers": 511, 00:20:17.505 "sock_priority": 0, 00:20:17.505 "trtype": "TCP", 00:20:17.505 "zcopy": false 00:20:17.505 } 00:20:17.505 }, 00:20:17.505 { 00:20:17.505 "method": "nvmf_create_subsystem", 00:20:17.505 "params": { 00:20:17.505 "allow_any_host": false, 00:20:17.505 "ana_reporting": false, 00:20:17.505 "max_cntlid": 65519, 00:20:17.505 "max_namespaces": 10, 00:20:17.505 "min_cntlid": 1, 00:20:17.505 "model_number": "SPDK bdev Controller", 00:20:17.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.505 "serial_number": "SPDK00000000000001" 00:20:17.505 } 00:20:17.505 }, 00:20:17.505 { 00:20:17.505 "method": "nvmf_subsystem_add_host", 00:20:17.505 "params": { 00:20:17.505 "host": "nqn.2016-06.io.spdk:host1", 00:20:17.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.505 "psk": "/tmp/tmp.6xvUcQqoIH" 00:20:17.505 } 00:20:17.505 }, 00:20:17.505 { 00:20:17.505 "method": "nvmf_subsystem_add_ns", 00:20:17.505 "params": { 00:20:17.505 "namespace": { 00:20:17.505 "bdev_name": "malloc0", 00:20:17.505 "nguid": "F5A72FD8375B4080B28657CB50133918", 00:20:17.505 "no_auto_visible": false, 00:20:17.505 "nsid": 1, 00:20:17.505 "uuid": "f5a72fd8-375b-4080-b286-57cb50133918" 00:20:17.505 }, 00:20:17.505 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:20:17.505 } 00:20:17.505 }, 00:20:17.505 { 00:20:17.505 "method": "nvmf_subsystem_add_listener", 00:20:17.505 "params": { 00:20:17.505 "listen_address": { 00:20:17.505 "adrfam": "IPv4", 00:20:17.505 "traddr": "10.0.0.2", 00:20:17.505 "trsvcid": "4420", 00:20:17.505 "trtype": "TCP" 00:20:17.505 }, 00:20:17.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.505 "secure_channel": true 00:20:17.505 } 00:20:17.505 } 00:20:17.505 ] 00:20:17.505 } 00:20:17.505 ] 00:20:17.505 }' 00:20:17.505 
14:38:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:17.505 14:38:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.505 14:38:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=101168 00:20:17.505 14:38:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:17.505 14:38:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 101168 00:20:17.505 14:38:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 101168 ']' 00:20:17.505 14:38:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.505 14:38:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:17.505 14:38:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.505 14:38:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:17.505 14:38:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.505 [2024-07-10 14:38:29.693960] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:20:17.505 [2024-07-10 14:38:29.694094] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.764 [2024-07-10 14:38:29.823893] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:17.764 [2024-07-10 14:38:29.844875] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.764 [2024-07-10 14:38:29.883982] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.764 [2024-07-10 14:38:29.884036] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.764 [2024-07-10 14:38:29.884048] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.764 [2024-07-10 14:38:29.884058] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.764 [2024-07-10 14:38:29.884067] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:17.764 [2024-07-10 14:38:29.884166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.022 [2024-07-10 14:38:30.069489] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.022 [2024-07-10 14:38:30.085386] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:18.022 [2024-07-10 14:38:30.101387] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:18.022 [2024-07-10 14:38:30.101603] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.590 14:38:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:18.590 14:38:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:18.590 14:38:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:18.590 14:38:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:18.590 14:38:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.590 14:38:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.590 14:38:30 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=101218 00:20:18.590 14:38:30 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 101218 /var/tmp/bdevperf.sock 00:20:18.590 14:38:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 101218 ']' 00:20:18.590 14:38:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:18.590 14:38:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:18.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:18.590 14:38:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:18.590 14:38:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:18.590 14:38:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.590 14:38:30 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:18.590 14:38:30 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:20:18.590 "subsystems": [ 00:20:18.590 { 00:20:18.590 "subsystem": "keyring", 00:20:18.590 "config": [] 00:20:18.590 }, 00:20:18.590 { 00:20:18.590 "subsystem": "iobuf", 00:20:18.590 "config": [ 00:20:18.590 { 00:20:18.590 "method": "iobuf_set_options", 00:20:18.590 "params": { 00:20:18.590 "large_bufsize": 135168, 00:20:18.590 "large_pool_count": 1024, 00:20:18.590 "small_bufsize": 8192, 00:20:18.590 "small_pool_count": 8192 00:20:18.590 } 00:20:18.590 } 00:20:18.590 ] 00:20:18.590 }, 00:20:18.590 { 00:20:18.590 "subsystem": "sock", 00:20:18.590 "config": [ 00:20:18.590 { 00:20:18.590 "method": "sock_set_default_impl", 00:20:18.590 "params": { 00:20:18.590 "impl_name": "posix" 00:20:18.590 } 00:20:18.590 }, 00:20:18.590 { 00:20:18.590 "method": "sock_impl_set_options", 00:20:18.590 "params": { 00:20:18.590 "enable_ktls": false, 00:20:18.590 "enable_placement_id": 0, 00:20:18.590 "enable_quickack": false, 00:20:18.590 "enable_recv_pipe": true, 00:20:18.590 "enable_zerocopy_send_client": false, 00:20:18.590 "enable_zerocopy_send_server": true, 00:20:18.590 "impl_name": "ssl", 00:20:18.590 "recv_buf_size": 4096, 00:20:18.590 "send_buf_size": 4096, 00:20:18.590 "tls_version": 0, 00:20:18.590 "zerocopy_threshold": 0 00:20:18.590 } 00:20:18.590 }, 00:20:18.590 { 00:20:18.590 "method": "sock_impl_set_options", 00:20:18.590 "params": { 00:20:18.590 "enable_ktls": false, 00:20:18.590 "enable_placement_id": 0, 00:20:18.590 "enable_quickack": false, 00:20:18.590 "enable_recv_pipe": true, 00:20:18.590 "enable_zerocopy_send_client": false, 00:20:18.590 "enable_zerocopy_send_server": true, 00:20:18.590 "impl_name": "posix", 00:20:18.590 "recv_buf_size": 2097152, 00:20:18.590 "send_buf_size": 2097152, 00:20:18.590 "tls_version": 0, 00:20:18.590 "zerocopy_threshold": 0 00:20:18.590 } 00:20:18.590 } 00:20:18.590 ] 00:20:18.590 }, 00:20:18.590 { 00:20:18.590 "subsystem": "vmd", 00:20:18.590 "config": [] 00:20:18.590 }, 00:20:18.590 { 00:20:18.590 "subsystem": "accel", 00:20:18.590 "config": [ 00:20:18.590 { 00:20:18.590 "method": "accel_set_options", 00:20:18.590 "params": { 00:20:18.590 "buf_count": 2048, 00:20:18.590 "large_cache_size": 16, 00:20:18.590 "sequence_count": 2048, 00:20:18.590 "small_cache_size": 128, 00:20:18.590 "task_count": 2048 00:20:18.590 } 00:20:18.590 } 00:20:18.591 ] 00:20:18.591 }, 00:20:18.591 { 00:20:18.591 "subsystem": "bdev", 00:20:18.591 "config": [ 00:20:18.591 { 00:20:18.591 "method": "bdev_set_options", 00:20:18.591 "params": { 00:20:18.591 "bdev_auto_examine": true, 00:20:18.591 "bdev_io_cache_size": 256, 00:20:18.591 "bdev_io_pool_size": 65535, 00:20:18.591 "iobuf_large_cache_size": 16, 00:20:18.591 "iobuf_small_cache_size": 128 00:20:18.591 } 00:20:18.591 }, 00:20:18.591 { 00:20:18.591 "method": "bdev_raid_set_options", 00:20:18.591 "params": { 00:20:18.591 "process_window_size_kb": 1024 00:20:18.591 } 00:20:18.591 }, 00:20:18.591 { 00:20:18.591 "method": "bdev_iscsi_set_options", 00:20:18.591 "params": { 00:20:18.591 "timeout_sec": 30 00:20:18.591 } 00:20:18.591 }, 00:20:18.591 { 00:20:18.591 "method": 
"bdev_nvme_set_options", 00:20:18.591 "params": { 00:20:18.591 "action_on_timeout": "none", 00:20:18.591 "allow_accel_sequence": false, 00:20:18.591 "arbitration_burst": 0, 00:20:18.591 "bdev_retry_count": 3, 00:20:18.591 "ctrlr_loss_timeout_sec": 0, 00:20:18.591 "delay_cmd_submit": true, 00:20:18.591 "dhchap_dhgroups": [ 00:20:18.591 "null", 00:20:18.591 "ffdhe2048", 00:20:18.591 "ffdhe3072", 00:20:18.591 "ffdhe4096", 00:20:18.591 "ffdhe6144", 00:20:18.591 "ffdhe8192" 00:20:18.591 ], 00:20:18.591 "dhchap_digests": [ 00:20:18.591 "sha256", 00:20:18.591 "sha384", 00:20:18.591 "sha512" 00:20:18.591 ], 00:20:18.591 "disable_auto_failback": false, 00:20:18.591 "fast_io_fail_timeout_sec": 0, 00:20:18.591 "generate_uuids": false, 00:20:18.591 "high_priority_weight": 0, 00:20:18.591 "io_path_stat": false, 00:20:18.591 "io_queue_requests": 512, 00:20:18.591 "keep_alive_timeout_ms": 10000, 00:20:18.591 "low_priority_weight": 0, 00:20:18.591 "medium_priority_weight": 0, 00:20:18.591 "nvme_adminq_poll_period_us": 10000, 00:20:18.591 "nvme_error_stat": false, 00:20:18.591 "nvme_ioq_poll_period_us": 0, 00:20:18.591 "rdma_cm_event_timeout_ms": 0, 00:20:18.591 "rdma_max_cq_size": 0, 00:20:18.591 "rdma_srq_size": 0, 00:20:18.591 "reconnect_delay_sec": 0, 00:20:18.591 "timeout_admin_us": 0, 00:20:18.591 "timeout_us": 0, 00:20:18.591 "transport_ack_timeout": 0, 00:20:18.591 "transport_retry_count": 4, 00:20:18.591 "transport_tos": 0 00:20:18.591 } 00:20:18.591 }, 00:20:18.591 { 00:20:18.591 "method": "bdev_nvme_attach_controller", 00:20:18.591 "params": { 00:20:18.591 "adrfam": "IPv4", 00:20:18.591 "ctrlr_loss_timeout_sec": 0, 00:20:18.591 "ddgst": false, 00:20:18.591 "fast_io_fail_timeout_sec": 0, 00:20:18.591 "hdgst": false, 00:20:18.591 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:18.591 "name": "TLSTEST", 00:20:18.591 "prchk_guard": false, 00:20:18.591 "prchk_reftag": false, 00:20:18.591 "psk": "/tmp/tmp.6xvUcQqoIH", 00:20:18.591 "reconnect_delay_sec": 0, 00:20:18.591 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.591 "traddr": "10.0.0.2", 00:20:18.591 "trsvcid": "4420", 00:20:18.591 "trtype": "TCP" 00:20:18.591 } 00:20:18.591 }, 00:20:18.591 { 00:20:18.591 "method": "bdev_nvme_set_hotplug", 00:20:18.591 "params": { 00:20:18.591 "enable": false, 00:20:18.591 "period_us": 100000 00:20:18.591 } 00:20:18.591 }, 00:20:18.591 { 00:20:18.591 "method": "bdev_wait_for_examine" 00:20:18.591 } 00:20:18.591 ] 00:20:18.591 }, 00:20:18.591 { 00:20:18.591 "subsystem": "nbd", 00:20:18.591 "config": [] 00:20:18.591 } 00:20:18.591 ] 00:20:18.591 }' 00:20:18.591 [2024-07-10 14:38:30.801426] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:20:18.591 [2024-07-10 14:38:30.801531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101218 ] 00:20:18.850 [2024-07-10 14:38:30.926910] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:20:18.850 [2024-07-10 14:38:30.946619] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.850 [2024-07-10 14:38:30.988763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.850 [2024-07-10 14:38:31.117512] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:18.850 [2024-07-10 14:38:31.117636] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:19.783 14:38:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:19.783 14:38:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:19.783 14:38:31 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:19.783 Running I/O for 10 seconds... 00:20:29.748 00:20:29.748 Latency(us) 00:20:29.748 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.748 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:29.748 Verification LBA range: start 0x0 length 0x2000 00:20:29.748 TLSTESTn1 : 10.02 3746.75 14.64 0.00 0.00 34095.09 7626.01 34555.35 00:20:29.748 =================================================================================================================== 00:20:29.748 Total : 3746.75 14.64 0.00 0.00 34095.09 7626.01 34555.35 00:20:29.748 0 00:20:29.748 14:38:41 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:29.748 14:38:41 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 101218 00:20:29.748 14:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 101218 ']' 00:20:29.748 14:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 101218 00:20:29.748 14:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:29.748 14:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:29.748 14:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101218 00:20:29.748 14:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:29.748 14:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:29.748 killing process with pid 101218 00:20:29.748 14:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101218' 00:20:29.748 14:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 101218 00:20:29.748 Received shutdown signal, test time was about 10.000000 seconds 00:20:29.748 00:20:29.748 Latency(us) 00:20:29.748 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.748 =================================================================================================================== 00:20:29.748 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:29.748 [2024-07-10 14:38:41.932763] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:29.748 14:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 101218 00:20:30.006 14:38:42 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 101168 00:20:30.007 14:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 101168 ']' 00:20:30.007 14:38:42 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@952 -- # kill -0 101168 00:20:30.007 14:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:30.007 14:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:30.007 14:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101168 00:20:30.007 14:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:30.007 14:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:30.007 14:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101168' 00:20:30.007 killing process with pid 101168 00:20:30.007 14:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 101168 00:20:30.007 [2024-07-10 14:38:42.104238] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:30.007 14:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 101168 00:20:30.007 14:38:42 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:20:30.007 14:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:30.007 14:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:30.007 14:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.007 14:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=101358 00:20:30.007 14:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 101358 00:20:30.007 14:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:30.007 14:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 101358 ']' 00:20:30.007 14:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.007 14:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:30.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:30.007 14:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.007 14:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:30.007 14:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.265 [2024-07-10 14:38:42.312316] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:20:30.265 [2024-07-10 14:38:42.312418] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.265 [2024-07-10 14:38:42.434152] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:30.265 [2024-07-10 14:38:42.453401] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.265 [2024-07-10 14:38:42.502268] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:30.265 [2024-07-10 14:38:42.502336] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
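For orientation, the nvmfappstart call above boils down to launching nvmf_tgt inside the test's network namespace and waiting until its RPC socket answers. A simplified sketch of that wiring follows; the nvmf_tgt command line and the waitforlisten helper are taken from the trace, while the backgrounding and pid capture are a simplification of what the autotest helpers actually do:

    # Launch the target in the test netns (command as traced above), then wait for RPC.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"    # autotest helper: polls until /var/tmp/spdk.sock accepts RPCs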
00:20:30.265 [2024-07-10 14:38:42.502350] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:30.265 [2024-07-10 14:38:42.502361] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:30.265 [2024-07-10 14:38:42.502370] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:30.266 [2024-07-10 14:38:42.502404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.200 14:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:31.200 14:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:31.200 14:38:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:31.200 14:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:31.200 14:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:31.200 14:38:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:31.200 14:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.6xvUcQqoIH 00:20:31.200 14:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.6xvUcQqoIH 00:20:31.200 14:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:31.458 [2024-07-10 14:38:43.602493] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:31.458 14:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:31.717 14:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:31.976 [2024-07-10 14:38:44.166620] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:31.976 [2024-07-10 14:38:44.166834] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:31.976 14:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:32.234 malloc0 00:20:32.234 14:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:32.800 14:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6xvUcQqoIH 00:20:33.058 [2024-07-10 14:38:45.105485] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:33.058 14:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=101466 00:20:33.058 14:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:33.058 14:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:33.058 14:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 101466 /var/tmp/bdevperf.sock 00:20:33.058 14:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 101466 ']' 00:20:33.058 14:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- 
# local rpc_addr=/var/tmp/bdevperf.sock 00:20:33.058 14:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:33.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:33.058 14:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:33.058 14:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:33.058 14:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.058 [2024-07-10 14:38:45.183377] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:20:33.058 [2024-07-10 14:38:45.183484] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101466 ] 00:20:33.058 [2024-07-10 14:38:45.305350] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:33.058 [2024-07-10 14:38:45.326830] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.315 [2024-07-10 14:38:45.379101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.248 14:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:34.248 14:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:34.248 14:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6xvUcQqoIH 00:20:34.248 14:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:34.506 [2024-07-10 14:38:46.697861] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:34.506 nvme0n1 00:20:34.506 14:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:34.762 Running I/O for 1 seconds... 
00:20:35.697 00:20:35.697 Latency(us) 00:20:35.697 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.697 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:35.697 Verification LBA range: start 0x0 length 0x2000 00:20:35.697 nvme0n1 : 1.02 3909.32 15.27 0.00 0.00 32414.01 6553.60 25261.15 00:20:35.697 =================================================================================================================== 00:20:35.697 Total : 3909.32 15.27 0.00 0.00 32414.01 6553.60 25261.15 00:20:35.697 0 00:20:35.697 14:38:47 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 101466 00:20:35.697 14:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 101466 ']' 00:20:35.697 14:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 101466 00:20:35.697 14:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:35.697 14:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:35.697 14:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101466 00:20:35.697 killing process with pid 101466 00:20:35.697 Received shutdown signal, test time was about 1.000000 seconds 00:20:35.697 00:20:35.697 Latency(us) 00:20:35.697 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.697 =================================================================================================================== 00:20:35.697 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:35.697 14:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:35.697 14:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:35.697 14:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101466' 00:20:35.697 14:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 101466 00:20:35.697 14:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 101466 00:20:35.956 14:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 101358 00:20:35.956 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 101358 ']' 00:20:35.956 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 101358 00:20:35.956 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:35.956 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:35.956 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101358 00:20:35.956 killing process with pid 101358 00:20:35.956 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:35.956 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:35.956 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101358' 00:20:35.956 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 101358 00:20:35.956 [2024-07-10 14:38:48.134481] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:35.956 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 101358 00:20:36.215 14:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:20:36.215 14:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:36.215 14:38:48 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:36.215 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.215 14:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:36.215 14:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=101536 00:20:36.215 14:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 101536 00:20:36.215 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 101536 ']' 00:20:36.215 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.215 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:36.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.215 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.215 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:36.215 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.215 [2024-07-10 14:38:48.350670] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:20:36.215 [2024-07-10 14:38:48.350798] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.215 [2024-07-10 14:38:48.473892] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:36.215 [2024-07-10 14:38:48.486119] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.474 [2024-07-10 14:38:48.520698] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.474 [2024-07-10 14:38:48.520743] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.474 [2024-07-10 14:38:48.520769] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.474 [2024-07-10 14:38:48.520778] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.474 [2024-07-10 14:38:48.520786] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
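Before I/O is run against it, each target instance in this run gets the same minimal TLS-enabled configuration. The setup_nvmf_tgt sequence traced earlier at target/tls.sh@51-@58 amounts to the sketch below; the RPC commands are copied from the trace, and -k requests the TLS-secured listener (the "secure_channel": true seen in the saved config later), while the --psk path form is the one flagged as deprecated in the warnings above:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # talks to the target's /var/tmp/spdk.sock
    $RPC nvmf_create_transport -t tcp -o               # bring up the TCP transport
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0         # create the malloc0 bdev backing namespace 1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6xvUcQqoIH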
00:20:36.474 [2024-07-10 14:38:48.520811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.474 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:36.474 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:36.474 14:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:36.474 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:36.474 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.474 14:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.474 14:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:20:36.474 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.474 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.474 [2024-07-10 14:38:48.644344] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.474 malloc0 00:20:36.474 [2024-07-10 14:38:48.670733] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:36.474 [2024-07-10 14:38:48.670917] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.474 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.474 14:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=101572 00:20:36.474 14:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:36.474 14:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 101572 /var/tmp/bdevperf.sock 00:20:36.474 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 101572 ']' 00:20:36.474 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.474 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:36.474 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:36.474 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:36.474 14:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.733 [2024-07-10 14:38:48.780301] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:20:36.733 [2024-07-10 14:38:48.780445] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101572 ] 00:20:36.733 [2024-07-10 14:38:48.902434] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
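On the host side, bdevperf is started with no bdevs and then configured live over its own RPC socket: the PSK interchange file is registered as a keyring key and the controller is attached with TLS, which is what the target/tls.sh@255/@256 steps just below do. Condensed from the trace (commands copied verbatim, only the $RPC shorthand added):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    $RPC keyring_file_add_key key0 /tmp/tmp.6xvUcQqoIH    # register the PSK file as key0
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests           # run the 1-second verify workload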
00:20:36.733 [2024-07-10 14:38:48.925455] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.733 [2024-07-10 14:38:48.973666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.992 14:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:36.992 14:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:36.992 14:38:49 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6xvUcQqoIH 00:20:37.250 14:38:49 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:37.507 [2024-07-10 14:38:49.573788] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:37.507 nvme0n1 00:20:37.507 14:38:49 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:37.507 Running I/O for 1 seconds... 00:20:38.884 00:20:38.884 Latency(us) 00:20:38.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.884 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:38.884 Verification LBA range: start 0x0 length 0x2000 00:20:38.884 nvme0n1 : 1.02 3951.96 15.44 0.00 0.00 32062.71 6583.39 25380.31 00:20:38.884 =================================================================================================================== 00:20:38.884 Total : 3951.96 15.44 0.00 0.00 32062.71 6583.39 25380.31 00:20:38.884 0 00:20:38.884 14:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:20:38.884 14:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.884 14:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.884 14:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.884 14:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:20:38.884 "subsystems": [ 00:20:38.884 { 00:20:38.884 "subsystem": "keyring", 00:20:38.884 "config": [ 00:20:38.884 { 00:20:38.884 "method": "keyring_file_add_key", 00:20:38.884 "params": { 00:20:38.884 "name": "key0", 00:20:38.884 "path": "/tmp/tmp.6xvUcQqoIH" 00:20:38.884 } 00:20:38.884 } 00:20:38.884 ] 00:20:38.884 }, 00:20:38.884 { 00:20:38.884 "subsystem": "iobuf", 00:20:38.884 "config": [ 00:20:38.884 { 00:20:38.884 "method": "iobuf_set_options", 00:20:38.884 "params": { 00:20:38.884 "large_bufsize": 135168, 00:20:38.884 "large_pool_count": 1024, 00:20:38.884 "small_bufsize": 8192, 00:20:38.884 "small_pool_count": 8192 00:20:38.884 } 00:20:38.884 } 00:20:38.884 ] 00:20:38.884 }, 00:20:38.884 { 00:20:38.884 "subsystem": "sock", 00:20:38.884 "config": [ 00:20:38.884 { 00:20:38.884 "method": "sock_set_default_impl", 00:20:38.884 "params": { 00:20:38.884 "impl_name": "posix" 00:20:38.884 } 00:20:38.884 }, 00:20:38.884 { 00:20:38.884 "method": "sock_impl_set_options", 00:20:38.884 "params": { 00:20:38.884 "enable_ktls": false, 00:20:38.884 "enable_placement_id": 0, 00:20:38.884 "enable_quickack": false, 00:20:38.884 "enable_recv_pipe": true, 00:20:38.884 "enable_zerocopy_send_client": false, 00:20:38.884 "enable_zerocopy_send_server": true, 00:20:38.884 "impl_name": "ssl", 00:20:38.884 "recv_buf_size": 4096, 
00:20:38.884 "send_buf_size": 4096, 00:20:38.884 "tls_version": 0, 00:20:38.884 "zerocopy_threshold": 0 00:20:38.884 } 00:20:38.884 }, 00:20:38.884 { 00:20:38.884 "method": "sock_impl_set_options", 00:20:38.884 "params": { 00:20:38.884 "enable_ktls": false, 00:20:38.884 "enable_placement_id": 0, 00:20:38.884 "enable_quickack": false, 00:20:38.884 "enable_recv_pipe": true, 00:20:38.884 "enable_zerocopy_send_client": false, 00:20:38.884 "enable_zerocopy_send_server": true, 00:20:38.884 "impl_name": "posix", 00:20:38.884 "recv_buf_size": 2097152, 00:20:38.884 "send_buf_size": 2097152, 00:20:38.884 "tls_version": 0, 00:20:38.884 "zerocopy_threshold": 0 00:20:38.884 } 00:20:38.884 } 00:20:38.884 ] 00:20:38.884 }, 00:20:38.884 { 00:20:38.884 "subsystem": "vmd", 00:20:38.884 "config": [] 00:20:38.884 }, 00:20:38.884 { 00:20:38.884 "subsystem": "accel", 00:20:38.884 "config": [ 00:20:38.884 { 00:20:38.884 "method": "accel_set_options", 00:20:38.884 "params": { 00:20:38.884 "buf_count": 2048, 00:20:38.884 "large_cache_size": 16, 00:20:38.884 "sequence_count": 2048, 00:20:38.884 "small_cache_size": 128, 00:20:38.884 "task_count": 2048 00:20:38.884 } 00:20:38.884 } 00:20:38.884 ] 00:20:38.884 }, 00:20:38.884 { 00:20:38.884 "subsystem": "bdev", 00:20:38.884 "config": [ 00:20:38.884 { 00:20:38.884 "method": "bdev_set_options", 00:20:38.884 "params": { 00:20:38.884 "bdev_auto_examine": true, 00:20:38.884 "bdev_io_cache_size": 256, 00:20:38.884 "bdev_io_pool_size": 65535, 00:20:38.884 "iobuf_large_cache_size": 16, 00:20:38.884 "iobuf_small_cache_size": 128 00:20:38.884 } 00:20:38.884 }, 00:20:38.884 { 00:20:38.884 "method": "bdev_raid_set_options", 00:20:38.884 "params": { 00:20:38.884 "process_window_size_kb": 1024 00:20:38.884 } 00:20:38.884 }, 00:20:38.884 { 00:20:38.884 "method": "bdev_iscsi_set_options", 00:20:38.884 "params": { 00:20:38.884 "timeout_sec": 30 00:20:38.884 } 00:20:38.884 }, 00:20:38.884 { 00:20:38.884 "method": "bdev_nvme_set_options", 00:20:38.884 "params": { 00:20:38.884 "action_on_timeout": "none", 00:20:38.884 "allow_accel_sequence": false, 00:20:38.884 "arbitration_burst": 0, 00:20:38.884 "bdev_retry_count": 3, 00:20:38.884 "ctrlr_loss_timeout_sec": 0, 00:20:38.884 "delay_cmd_submit": true, 00:20:38.884 "dhchap_dhgroups": [ 00:20:38.884 "null", 00:20:38.884 "ffdhe2048", 00:20:38.884 "ffdhe3072", 00:20:38.884 "ffdhe4096", 00:20:38.884 "ffdhe6144", 00:20:38.884 "ffdhe8192" 00:20:38.884 ], 00:20:38.884 "dhchap_digests": [ 00:20:38.884 "sha256", 00:20:38.884 "sha384", 00:20:38.884 "sha512" 00:20:38.884 ], 00:20:38.884 "disable_auto_failback": false, 00:20:38.884 "fast_io_fail_timeout_sec": 0, 00:20:38.884 "generate_uuids": false, 00:20:38.884 "high_priority_weight": 0, 00:20:38.884 "io_path_stat": false, 00:20:38.884 "io_queue_requests": 0, 00:20:38.884 "keep_alive_timeout_ms": 10000, 00:20:38.884 "low_priority_weight": 0, 00:20:38.884 "medium_priority_weight": 0, 00:20:38.884 "nvme_adminq_poll_period_us": 10000, 00:20:38.884 "nvme_error_stat": false, 00:20:38.884 "nvme_ioq_poll_period_us": 0, 00:20:38.884 "rdma_cm_event_timeout_ms": 0, 00:20:38.884 "rdma_max_cq_size": 0, 00:20:38.884 "rdma_srq_size": 0, 00:20:38.884 "reconnect_delay_sec": 0, 00:20:38.884 "timeout_admin_us": 0, 00:20:38.884 "timeout_us": 0, 00:20:38.884 "transport_ack_timeout": 0, 00:20:38.884 "transport_retry_count": 4, 00:20:38.884 "transport_tos": 0 00:20:38.884 } 00:20:38.884 }, 00:20:38.884 { 00:20:38.884 "method": "bdev_nvme_set_hotplug", 00:20:38.884 "params": { 00:20:38.884 "enable": false, 00:20:38.884 
"period_us": 100000 00:20:38.884 } 00:20:38.884 }, 00:20:38.884 { 00:20:38.884 "method": "bdev_malloc_create", 00:20:38.884 "params": { 00:20:38.884 "block_size": 4096, 00:20:38.884 "name": "malloc0", 00:20:38.884 "num_blocks": 8192, 00:20:38.884 "optimal_io_boundary": 0, 00:20:38.884 "physical_block_size": 4096, 00:20:38.884 "uuid": "3057b000-fe2c-4fa3-af20-cc73ada6d64f" 00:20:38.884 } 00:20:38.884 }, 00:20:38.884 { 00:20:38.884 "method": "bdev_wait_for_examine" 00:20:38.884 } 00:20:38.884 ] 00:20:38.884 }, 00:20:38.884 { 00:20:38.884 "subsystem": "nbd", 00:20:38.884 "config": [] 00:20:38.884 }, 00:20:38.884 { 00:20:38.884 "subsystem": "scheduler", 00:20:38.884 "config": [ 00:20:38.884 { 00:20:38.884 "method": "framework_set_scheduler", 00:20:38.884 "params": { 00:20:38.885 "name": "static" 00:20:38.885 } 00:20:38.885 } 00:20:38.885 ] 00:20:38.885 }, 00:20:38.885 { 00:20:38.885 "subsystem": "nvmf", 00:20:38.885 "config": [ 00:20:38.885 { 00:20:38.885 "method": "nvmf_set_config", 00:20:38.885 "params": { 00:20:38.885 "admin_cmd_passthru": { 00:20:38.885 "identify_ctrlr": false 00:20:38.885 }, 00:20:38.885 "discovery_filter": "match_any" 00:20:38.885 } 00:20:38.885 }, 00:20:38.885 { 00:20:38.885 "method": "nvmf_set_max_subsystems", 00:20:38.885 "params": { 00:20:38.885 "max_subsystems": 1024 00:20:38.885 } 00:20:38.885 }, 00:20:38.885 { 00:20:38.885 "method": "nvmf_set_crdt", 00:20:38.885 "params": { 00:20:38.885 "crdt1": 0, 00:20:38.885 "crdt2": 0, 00:20:38.885 "crdt3": 0 00:20:38.885 } 00:20:38.885 }, 00:20:38.885 { 00:20:38.885 "method": "nvmf_create_transport", 00:20:38.885 "params": { 00:20:38.885 "abort_timeout_sec": 1, 00:20:38.885 "ack_timeout": 0, 00:20:38.885 "buf_cache_size": 4294967295, 00:20:38.885 "c2h_success": false, 00:20:38.885 "data_wr_pool_size": 0, 00:20:38.885 "dif_insert_or_strip": false, 00:20:38.885 "in_capsule_data_size": 4096, 00:20:38.885 "io_unit_size": 131072, 00:20:38.885 "max_aq_depth": 128, 00:20:38.885 "max_io_qpairs_per_ctrlr": 127, 00:20:38.885 "max_io_size": 131072, 00:20:38.885 "max_queue_depth": 128, 00:20:38.885 "num_shared_buffers": 511, 00:20:38.885 "sock_priority": 0, 00:20:38.885 "trtype": "TCP", 00:20:38.885 "zcopy": false 00:20:38.885 } 00:20:38.885 }, 00:20:38.885 { 00:20:38.885 "method": "nvmf_create_subsystem", 00:20:38.885 "params": { 00:20:38.885 "allow_any_host": false, 00:20:38.885 "ana_reporting": false, 00:20:38.885 "max_cntlid": 65519, 00:20:38.885 "max_namespaces": 32, 00:20:38.885 "min_cntlid": 1, 00:20:38.885 "model_number": "SPDK bdev Controller", 00:20:38.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.885 "serial_number": "00000000000000000000" 00:20:38.885 } 00:20:38.885 }, 00:20:38.885 { 00:20:38.885 "method": "nvmf_subsystem_add_host", 00:20:38.885 "params": { 00:20:38.885 "host": "nqn.2016-06.io.spdk:host1", 00:20:38.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.885 "psk": "key0" 00:20:38.885 } 00:20:38.885 }, 00:20:38.885 { 00:20:38.885 "method": "nvmf_subsystem_add_ns", 00:20:38.885 "params": { 00:20:38.885 "namespace": { 00:20:38.885 "bdev_name": "malloc0", 00:20:38.885 "nguid": "3057B000FE2C4FA3AF20CC73ADA6D64F", 00:20:38.885 "no_auto_visible": false, 00:20:38.885 "nsid": 1, 00:20:38.885 "uuid": "3057b000-fe2c-4fa3-af20-cc73ada6d64f" 00:20:38.885 }, 00:20:38.885 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:20:38.885 } 00:20:38.885 }, 00:20:38.885 { 00:20:38.885 "method": "nvmf_subsystem_add_listener", 00:20:38.885 "params": { 00:20:38.885 "listen_address": { 00:20:38.885 "adrfam": "IPv4", 00:20:38.885 "traddr": 
"10.0.0.2", 00:20:38.885 "trsvcid": "4420", 00:20:38.885 "trtype": "TCP" 00:20:38.885 }, 00:20:38.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.885 "secure_channel": true 00:20:38.885 } 00:20:38.885 } 00:20:38.885 ] 00:20:38.885 } 00:20:38.885 ] 00:20:38.885 }' 00:20:38.885 14:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:39.144 14:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:20:39.144 "subsystems": [ 00:20:39.144 { 00:20:39.144 "subsystem": "keyring", 00:20:39.144 "config": [ 00:20:39.144 { 00:20:39.144 "method": "keyring_file_add_key", 00:20:39.144 "params": { 00:20:39.144 "name": "key0", 00:20:39.144 "path": "/tmp/tmp.6xvUcQqoIH" 00:20:39.144 } 00:20:39.144 } 00:20:39.144 ] 00:20:39.144 }, 00:20:39.144 { 00:20:39.144 "subsystem": "iobuf", 00:20:39.144 "config": [ 00:20:39.144 { 00:20:39.144 "method": "iobuf_set_options", 00:20:39.144 "params": { 00:20:39.144 "large_bufsize": 135168, 00:20:39.144 "large_pool_count": 1024, 00:20:39.144 "small_bufsize": 8192, 00:20:39.144 "small_pool_count": 8192 00:20:39.144 } 00:20:39.144 } 00:20:39.144 ] 00:20:39.144 }, 00:20:39.144 { 00:20:39.144 "subsystem": "sock", 00:20:39.144 "config": [ 00:20:39.144 { 00:20:39.144 "method": "sock_set_default_impl", 00:20:39.144 "params": { 00:20:39.144 "impl_name": "posix" 00:20:39.144 } 00:20:39.144 }, 00:20:39.144 { 00:20:39.144 "method": "sock_impl_set_options", 00:20:39.144 "params": { 00:20:39.144 "enable_ktls": false, 00:20:39.144 "enable_placement_id": 0, 00:20:39.144 "enable_quickack": false, 00:20:39.144 "enable_recv_pipe": true, 00:20:39.144 "enable_zerocopy_send_client": false, 00:20:39.144 "enable_zerocopy_send_server": true, 00:20:39.145 "impl_name": "ssl", 00:20:39.145 "recv_buf_size": 4096, 00:20:39.145 "send_buf_size": 4096, 00:20:39.145 "tls_version": 0, 00:20:39.145 "zerocopy_threshold": 0 00:20:39.145 } 00:20:39.145 }, 00:20:39.145 { 00:20:39.145 "method": "sock_impl_set_options", 00:20:39.145 "params": { 00:20:39.145 "enable_ktls": false, 00:20:39.145 "enable_placement_id": 0, 00:20:39.145 "enable_quickack": false, 00:20:39.145 "enable_recv_pipe": true, 00:20:39.145 "enable_zerocopy_send_client": false, 00:20:39.145 "enable_zerocopy_send_server": true, 00:20:39.145 "impl_name": "posix", 00:20:39.145 "recv_buf_size": 2097152, 00:20:39.145 "send_buf_size": 2097152, 00:20:39.145 "tls_version": 0, 00:20:39.145 "zerocopy_threshold": 0 00:20:39.145 } 00:20:39.145 } 00:20:39.145 ] 00:20:39.145 }, 00:20:39.145 { 00:20:39.145 "subsystem": "vmd", 00:20:39.145 "config": [] 00:20:39.145 }, 00:20:39.145 { 00:20:39.145 "subsystem": "accel", 00:20:39.145 "config": [ 00:20:39.145 { 00:20:39.145 "method": "accel_set_options", 00:20:39.145 "params": { 00:20:39.145 "buf_count": 2048, 00:20:39.145 "large_cache_size": 16, 00:20:39.145 "sequence_count": 2048, 00:20:39.145 "small_cache_size": 128, 00:20:39.145 "task_count": 2048 00:20:39.145 } 00:20:39.145 } 00:20:39.145 ] 00:20:39.145 }, 00:20:39.145 { 00:20:39.145 "subsystem": "bdev", 00:20:39.145 "config": [ 00:20:39.145 { 00:20:39.145 "method": "bdev_set_options", 00:20:39.145 "params": { 00:20:39.145 "bdev_auto_examine": true, 00:20:39.145 "bdev_io_cache_size": 256, 00:20:39.145 "bdev_io_pool_size": 65535, 00:20:39.145 "iobuf_large_cache_size": 16, 00:20:39.145 "iobuf_small_cache_size": 128 00:20:39.145 } 00:20:39.145 }, 00:20:39.145 { 00:20:39.145 "method": "bdev_raid_set_options", 00:20:39.145 "params": { 00:20:39.145 "process_window_size_kb": 
1024 00:20:39.145 } 00:20:39.145 }, 00:20:39.145 { 00:20:39.145 "method": "bdev_iscsi_set_options", 00:20:39.145 "params": { 00:20:39.145 "timeout_sec": 30 00:20:39.145 } 00:20:39.145 }, 00:20:39.145 { 00:20:39.145 "method": "bdev_nvme_set_options", 00:20:39.145 "params": { 00:20:39.145 "action_on_timeout": "none", 00:20:39.145 "allow_accel_sequence": false, 00:20:39.145 "arbitration_burst": 0, 00:20:39.145 "bdev_retry_count": 3, 00:20:39.145 "ctrlr_loss_timeout_sec": 0, 00:20:39.145 "delay_cmd_submit": true, 00:20:39.145 "dhchap_dhgroups": [ 00:20:39.145 "null", 00:20:39.145 "ffdhe2048", 00:20:39.145 "ffdhe3072", 00:20:39.145 "ffdhe4096", 00:20:39.145 "ffdhe6144", 00:20:39.145 "ffdhe8192" 00:20:39.145 ], 00:20:39.145 "dhchap_digests": [ 00:20:39.145 "sha256", 00:20:39.145 "sha384", 00:20:39.145 "sha512" 00:20:39.145 ], 00:20:39.145 "disable_auto_failback": false, 00:20:39.145 "fast_io_fail_timeout_sec": 0, 00:20:39.145 "generate_uuids": false, 00:20:39.145 "high_priority_weight": 0, 00:20:39.145 "io_path_stat": false, 00:20:39.145 "io_queue_requests": 512, 00:20:39.145 "keep_alive_timeout_ms": 10000, 00:20:39.145 "low_priority_weight": 0, 00:20:39.145 "medium_priority_weight": 0, 00:20:39.145 "nvme_adminq_poll_period_us": 10000, 00:20:39.145 "nvme_error_stat": false, 00:20:39.145 "nvme_ioq_poll_period_us": 0, 00:20:39.145 "rdma_cm_event_timeout_ms": 0, 00:20:39.145 "rdma_max_cq_size": 0, 00:20:39.145 "rdma_srq_size": 0, 00:20:39.145 "reconnect_delay_sec": 0, 00:20:39.145 "timeout_admin_us": 0, 00:20:39.145 "timeout_us": 0, 00:20:39.145 "transport_ack_timeout": 0, 00:20:39.145 "transport_retry_count": 4, 00:20:39.145 "transport_tos": 0 00:20:39.145 } 00:20:39.145 }, 00:20:39.145 { 00:20:39.145 "method": "bdev_nvme_attach_controller", 00:20:39.145 "params": { 00:20:39.145 "adrfam": "IPv4", 00:20:39.145 "ctrlr_loss_timeout_sec": 0, 00:20:39.145 "ddgst": false, 00:20:39.145 "fast_io_fail_timeout_sec": 0, 00:20:39.145 "hdgst": false, 00:20:39.145 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:39.145 "name": "nvme0", 00:20:39.145 "prchk_guard": false, 00:20:39.145 "prchk_reftag": false, 00:20:39.145 "psk": "key0", 00:20:39.145 "reconnect_delay_sec": 0, 00:20:39.145 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.145 "traddr": "10.0.0.2", 00:20:39.145 "trsvcid": "4420", 00:20:39.145 "trtype": "TCP" 00:20:39.145 } 00:20:39.145 }, 00:20:39.145 { 00:20:39.145 "method": "bdev_nvme_set_hotplug", 00:20:39.145 "params": { 00:20:39.145 "enable": false, 00:20:39.145 "period_us": 100000 00:20:39.145 } 00:20:39.145 }, 00:20:39.145 { 00:20:39.145 "method": "bdev_enable_histogram", 00:20:39.145 "params": { 00:20:39.145 "enable": true, 00:20:39.145 "name": "nvme0n1" 00:20:39.145 } 00:20:39.145 }, 00:20:39.145 { 00:20:39.145 "method": "bdev_wait_for_examine" 00:20:39.145 } 00:20:39.145 ] 00:20:39.145 }, 00:20:39.145 { 00:20:39.145 "subsystem": "nbd", 00:20:39.145 "config": [] 00:20:39.145 } 00:20:39.145 ] 00:20:39.145 }' 00:20:39.145 14:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 101572 00:20:39.145 14:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 101572 ']' 00:20:39.145 14:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 101572 00:20:39.145 14:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:39.145 14:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:39.145 14:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101572 00:20:39.145 killing process 
with pid 101572 00:20:39.145 Received shutdown signal, test time was about 1.000000 seconds 00:20:39.145 00:20:39.145 Latency(us) 00:20:39.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.145 =================================================================================================================== 00:20:39.145 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:39.145 14:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:39.145 14:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:39.145 14:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101572' 00:20:39.145 14:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 101572 00:20:39.145 14:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 101572 00:20:39.404 14:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 101536 00:20:39.404 14:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 101536 ']' 00:20:39.404 14:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 101536 00:20:39.404 14:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:39.404 14:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:39.404 14:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101536 00:20:39.404 killing process with pid 101536 00:20:39.404 14:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:39.404 14:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:39.404 14:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101536' 00:20:39.404 14:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 101536 00:20:39.404 14:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 101536 00:20:39.404 14:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:20:39.404 14:38:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:39.404 14:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:39.404 14:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:20:39.404 "subsystems": [ 00:20:39.404 { 00:20:39.404 "subsystem": "keyring", 00:20:39.404 "config": [ 00:20:39.404 { 00:20:39.404 "method": "keyring_file_add_key", 00:20:39.404 "params": { 00:20:39.404 "name": "key0", 00:20:39.404 "path": "/tmp/tmp.6xvUcQqoIH" 00:20:39.404 } 00:20:39.404 } 00:20:39.404 ] 00:20:39.404 }, 00:20:39.404 { 00:20:39.404 "subsystem": "iobuf", 00:20:39.404 "config": [ 00:20:39.404 { 00:20:39.404 "method": "iobuf_set_options", 00:20:39.404 "params": { 00:20:39.404 "large_bufsize": 135168, 00:20:39.404 "large_pool_count": 1024, 00:20:39.404 "small_bufsize": 8192, 00:20:39.404 "small_pool_count": 8192 00:20:39.404 } 00:20:39.404 } 00:20:39.404 ] 00:20:39.404 }, 00:20:39.404 { 00:20:39.404 "subsystem": "sock", 00:20:39.404 "config": [ 00:20:39.404 { 00:20:39.404 "method": "sock_set_default_impl", 00:20:39.404 "params": { 00:20:39.404 "impl_name": "posix" 00:20:39.404 } 00:20:39.404 }, 00:20:39.404 { 00:20:39.404 "method": "sock_impl_set_options", 00:20:39.404 "params": { 00:20:39.404 "enable_ktls": false, 00:20:39.404 "enable_placement_id": 0, 00:20:39.404 "enable_quickack": false, 00:20:39.404 "enable_recv_pipe": true, 00:20:39.404 
"enable_zerocopy_send_client": false, 00:20:39.404 "enable_zerocopy_send_server": true, 00:20:39.404 "impl_name": "ssl", 00:20:39.404 "recv_buf_size": 4096, 00:20:39.404 "send_buf_size": 4096, 00:20:39.404 "tls_version": 0, 00:20:39.404 "zerocopy_threshold": 0 00:20:39.404 } 00:20:39.404 }, 00:20:39.404 { 00:20:39.405 "method": "sock_impl_set_options", 00:20:39.405 "params": { 00:20:39.405 "enable_ktls": false, 00:20:39.405 "enable_placement_id": 0, 00:20:39.405 "enable_quickack": false, 00:20:39.405 "enable_recv_pipe": true, 00:20:39.405 "enable_zerocopy_send_client": false, 00:20:39.405 "enable_zerocopy_send_server": true, 00:20:39.405 "impl_name": "posix", 00:20:39.405 "recv_buf_size": 2097152, 00:20:39.405 "send_buf_size": 2097152, 00:20:39.405 "tls_version": 0, 00:20:39.405 "zerocopy_threshold": 0 00:20:39.405 } 00:20:39.405 } 00:20:39.405 ] 00:20:39.405 }, 00:20:39.405 { 00:20:39.405 "subsystem": "vmd", 00:20:39.405 "config": [] 00:20:39.405 }, 00:20:39.405 { 00:20:39.405 "subsystem": "accel", 00:20:39.405 "config": [ 00:20:39.405 { 00:20:39.405 "method": "accel_set_options", 00:20:39.405 "params": { 00:20:39.405 "buf_count": 2048, 00:20:39.405 "large_cache_size": 16, 00:20:39.405 "sequence_count": 2048, 00:20:39.405 "small_cache_size": 128, 00:20:39.405 "task_count": 2048 00:20:39.405 } 00:20:39.405 } 00:20:39.405 ] 00:20:39.405 }, 00:20:39.405 { 00:20:39.405 "subsystem": "bdev", 00:20:39.405 "config": [ 00:20:39.405 { 00:20:39.405 "method": "bdev_set_options", 00:20:39.405 "params": { 00:20:39.405 "bdev_auto_examine": true, 00:20:39.405 "bdev_io_cache_size": 256, 00:20:39.405 "bdev_io_pool_size": 65535, 00:20:39.405 "iobuf_large_cache_size": 16, 00:20:39.405 "iobuf_small_cache_size": 128 00:20:39.405 } 00:20:39.405 }, 00:20:39.405 { 00:20:39.405 "method": "bdev_raid_set_options", 00:20:39.405 "params": { 00:20:39.405 "process_window_size_kb": 1024 00:20:39.405 } 00:20:39.405 }, 00:20:39.405 { 00:20:39.405 "method": "bdev_iscsi_set_options", 00:20:39.405 "params": { 00:20:39.405 "timeout_sec": 30 00:20:39.405 } 00:20:39.405 }, 00:20:39.405 { 00:20:39.405 "method": "bdev_nvme_set_options", 00:20:39.405 "params": { 00:20:39.405 "action_on_timeout": "none", 00:20:39.405 "allow_accel_sequence": false, 00:20:39.405 "arbitration_burst": 0, 00:20:39.405 "bdev_retry_count": 3, 00:20:39.405 "ctrlr_loss_timeout_sec": 0, 00:20:39.405 "delay_cmd_submit": true, 00:20:39.405 "dhchap_dhgroups": [ 00:20:39.405 "null", 00:20:39.405 "ffdhe2048", 00:20:39.405 "ffdhe3072", 00:20:39.405 "ffdhe4096", 00:20:39.405 "ffdhe6144", 00:20:39.405 "ffdhe8192" 00:20:39.405 ], 00:20:39.405 "dhchap_digests": [ 00:20:39.405 "sha256", 00:20:39.405 "sha384", 00:20:39.405 "sha512" 00:20:39.405 ], 00:20:39.405 "disable_auto_failback": false, 00:20:39.405 "fast_io_fail_timeout_sec": 0, 00:20:39.405 "generate_uuids": false, 00:20:39.405 "high_priority_weight": 0, 00:20:39.405 "io_path_stat": false, 00:20:39.405 "io_queue_requests": 0, 00:20:39.405 "keep_alive_timeout_ms": 10000, 00:20:39.405 "low_priority_weight": 0, 00:20:39.405 "medium_priority_weight": 0, 00:20:39.405 "nvme_adminq_poll_period_us": 10000, 00:20:39.405 "nvme_error_stat": false, 00:20:39.405 "nvme_ioq_poll_period_us": 0, 00:20:39.405 "rdma_cm_event_timeout_ms": 0, 00:20:39.405 "rdma_max_cq_size": 0, 00:20:39.405 "rdma_srq_size": 0, 00:20:39.405 "reconnect_delay_sec": 0, 00:20:39.405 "timeout_admin_us": 0, 00:20:39.405 "timeout_us": 0, 00:20:39.405 "transport_ack_timeout": 0, 00:20:39.405 "transport_retry_count": 4, 00:20:39.405 "transport_tos": 0 
00:20:39.405 } 00:20:39.405 }, 00:20:39.405 { 00:20:39.405 "method": "bdev_nvme_set_hotplug", 00:20:39.405 "params": { 00:20:39.405 "enable": false, 00:20:39.405 "period_us": 100000 00:20:39.405 } 00:20:39.405 }, 00:20:39.405 { 00:20:39.405 "method": "bdev_malloc_create", 00:20:39.405 "params": { 00:20:39.405 "block_size": 4096, 00:20:39.405 "name": "malloc0", 00:20:39.405 "num_blocks": 8192, 00:20:39.405 "optimal_io_boundary": 0, 00:20:39.405 "physical_block_size": 4096, 00:20:39.405 "uuid": "3057b000-fe2c-4fa3-af20-cc73ada6d64f" 00:20:39.405 } 00:20:39.405 }, 00:20:39.405 { 00:20:39.405 "method": "bdev_wait_for_examine" 00:20:39.405 } 00:20:39.405 ] 00:20:39.405 }, 00:20:39.405 { 00:20:39.405 "subsystem": "nbd", 00:20:39.405 "config": [] 00:20:39.405 }, 00:20:39.405 { 00:20:39.405 "subsystem": "scheduler", 00:20:39.405 "config": [ 00:20:39.405 { 00:20:39.405 "method": "framework_set_scheduler", 00:20:39.405 "params": { 00:20:39.405 "name": "static" 00:20:39.405 } 00:20:39.405 } 00:20:39.405 ] 00:20:39.405 }, 00:20:39.405 { 00:20:39.405 "subsystem": "nvmf", 00:20:39.405 "config": [ 00:20:39.405 { 00:20:39.405 "method": "nvmf_set_config", 00:20:39.405 "params": { 00:20:39.405 "admin_cmd_passthru": { 00:20:39.405 "identify_ctrlr": false 00:20:39.405 }, 00:20:39.405 "discovery_filter": "match_any" 00:20:39.405 } 00:20:39.405 }, 00:20:39.405 { 00:20:39.405 "method": "nvmf_set_max_subsystems", 00:20:39.405 "params": { 00:20:39.405 "max_subsystems": 1024 00:20:39.405 } 00:20:39.405 }, 00:20:39.405 { 00:20:39.405 "method": "nvmf_set_crdt", 00:20:39.405 "params": { 00:20:39.405 "crdt1": 0, 00:20:39.405 "crdt2": 0, 00:20:39.405 "crdt3": 0 00:20:39.405 } 00:20:39.405 }, 00:20:39.405 { 00:20:39.405 "method": "nvmf_create_transport", 00:20:39.405 "params": { 00:20:39.405 "abort_timeout_sec": 1, 00:20:39.405 "ack_timeout": 0, 00:20:39.405 "buf_cache_size": 4294967295, 00:20:39.405 "c2h_success": false, 00:20:39.405 "data_wr_pool_size": 0, 00:20:39.405 "dif_insert_or_strip": false, 00:20:39.405 "in_capsule_data_size": 4096, 00:20:39.405 "io_unit_size": 131072, 00:20:39.405 "max_aq_depth": 128, 00:20:39.405 "max_io_qpairs_per_ctrlr": 127, 00:20:39.405 "max_io_size": 131072, 00:20:39.405 "max_queue_depth": 128, 00:20:39.405 "num_shared_buffers": 511, 00:20:39.405 "sock_priority": 0, 00:20:39.405 "trtype": "TCP", 00:20:39.405 "zcopy": false 00:20:39.405 } 00:20:39.405 }, 00:20:39.405 { 00:20:39.405 "method": "nvmf_create_subsystem", 00:20:39.405 "params": { 00:20:39.405 "allow_any_host": false, 00:20:39.405 "ana_reporting": false, 00:20:39.405 "max_cntlid": 65519, 00:20:39.405 "max_namespaces": 32, 00:20:39.405 "min_cntlid": 1, 00:20:39.405 "model_number": "SPDK bdev Controller", 00:20:39.405 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.405 "serial_number": "00000000000000000000" 00:20:39.405 } 00:20:39.405 }, 00:20:39.405 { 00:20:39.405 "method": "nvmf_subsystem_add_host", 00:20:39.405 "params": { 00:20:39.405 "host": "nqn.2016-06.io.spdk:host1", 00:20:39.405 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.405 "psk": "key0" 00:20:39.405 } 00:20:39.405 }, 00:20:39.405 { 00:20:39.405 "method": "nvmf_subsystem_add_ns", 00:20:39.405 "params": { 00:20:39.405 "namespace": { 00:20:39.405 "bdev_name": "malloc0", 00:20:39.405 "nguid": "3057B000FE2C4FA3AF20CC73ADA6D64F", 00:20:39.405 "no_auto_visible": false, 00:20:39.405 "nsid": 1, 00:20:39.405 "uuid": "3057b000-fe2c-4fa3-af20-cc73ada6d64f" 00:20:39.405 }, 00:20:39.405 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:20:39.405 } 00:20:39.405 }, 00:20:39.405 { 00:20:39.405 
"method": "nvmf_subsystem_add_listener", 00:20:39.405 "params": { 00:20:39.405 "listen_address": { 00:20:39.405 "adrfam": "IPv4", 00:20:39.405 "traddr": "10.0.0.2", 00:20:39.405 "trsvcid": "4420", 00:20:39.405 "trtype": "TCP" 00:20:39.405 }, 00:20:39.405 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.405 "secure_channel": true 00:20:39.405 } 00:20:39.405 } 00:20:39.405 ] 00:20:39.405 } 00:20:39.405 ] 00:20:39.405 }' 00:20:39.405 14:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.405 14:38:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=101644 00:20:39.405 14:38:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:39.405 14:38:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 101644 00:20:39.405 14:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 101644 ']' 00:20:39.405 14:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.405 14:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:39.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.405 14:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.405 14:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:39.405 14:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.664 [2024-07-10 14:38:51.710613] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:20:39.664 [2024-07-10 14:38:51.710725] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.664 [2024-07-10 14:38:51.834870] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:39.664 [2024-07-10 14:38:51.847086] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.664 [2024-07-10 14:38:51.881981] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.664 [2024-07-10 14:38:51.882034] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.664 [2024-07-10 14:38:51.882045] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.664 [2024-07-10 14:38:51.882053] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.664 [2024-07-10 14:38:51.882061] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:39.664 [2024-07-10 14:38:51.882141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.922 [2024-07-10 14:38:52.068851] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.922 [2024-07-10 14:38:52.100778] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:39.922 [2024-07-10 14:38:52.101007] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.490 14:38:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:40.490 14:38:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:40.490 14:38:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:40.490 14:38:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:40.490 14:38:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.490 14:38:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:40.490 14:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=101688 00:20:40.490 14:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 101688 /var/tmp/bdevperf.sock 00:20:40.490 14:38:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 101688 ']' 00:20:40.490 14:38:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:40.490 14:38:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.490 14:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:40.490 14:38:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
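The final pass re-creates both processes purely from the JSON captured with save_config, so no further per-object RPCs are needed. Schematically it looks like the sketch below; the save_config calls and the -c options are taken from the trace, while the process substitution is only one plausible way to end up with the /dev/fd/62 and /dev/fd/63 paths seen there, and the actual plumbing belongs to the test script:

    tgtcfg=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config)                              # @263
    bperfcfg=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)  # @264
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &             # target restored from the saved config
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &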
00:20:40.490 14:38:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.490 14:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:20:40.490 "subsystems": [ 00:20:40.490 { 00:20:40.490 "subsystem": "keyring", 00:20:40.490 "config": [ 00:20:40.490 { 00:20:40.490 "method": "keyring_file_add_key", 00:20:40.490 "params": { 00:20:40.490 "name": "key0", 00:20:40.490 "path": "/tmp/tmp.6xvUcQqoIH" 00:20:40.490 } 00:20:40.490 } 00:20:40.490 ] 00:20:40.490 }, 00:20:40.490 { 00:20:40.490 "subsystem": "iobuf", 00:20:40.490 "config": [ 00:20:40.490 { 00:20:40.490 "method": "iobuf_set_options", 00:20:40.490 "params": { 00:20:40.490 "large_bufsize": 135168, 00:20:40.490 "large_pool_count": 1024, 00:20:40.490 "small_bufsize": 8192, 00:20:40.490 "small_pool_count": 8192 00:20:40.490 } 00:20:40.490 } 00:20:40.490 ] 00:20:40.490 }, 00:20:40.490 { 00:20:40.490 "subsystem": "sock", 00:20:40.490 "config": [ 00:20:40.490 { 00:20:40.490 "method": "sock_set_default_impl", 00:20:40.490 "params": { 00:20:40.490 "impl_name": "posix" 00:20:40.490 } 00:20:40.490 }, 00:20:40.490 { 00:20:40.490 "method": "sock_impl_set_options", 00:20:40.490 "params": { 00:20:40.490 "enable_ktls": false, 00:20:40.490 "enable_placement_id": 0, 00:20:40.490 "enable_quickack": false, 00:20:40.490 "enable_recv_pipe": true, 00:20:40.490 "enable_zerocopy_send_client": false, 00:20:40.490 "enable_zerocopy_send_server": true, 00:20:40.490 "impl_name": "ssl", 00:20:40.490 "recv_buf_size": 4096, 00:20:40.490 "send_buf_size": 4096, 00:20:40.490 "tls_version": 0, 00:20:40.490 "zerocopy_threshold": 0 00:20:40.490 } 00:20:40.490 }, 00:20:40.490 { 00:20:40.490 "method": "sock_impl_set_options", 00:20:40.490 "params": { 00:20:40.490 "enable_ktls": false, 00:20:40.490 "enable_placement_id": 0, 00:20:40.490 "enable_quickack": false, 00:20:40.490 "enable_recv_pipe": true, 00:20:40.490 "enable_zerocopy_send_client": false, 00:20:40.490 "enable_zerocopy_send_server": true, 00:20:40.490 "impl_name": "posix", 00:20:40.490 "recv_buf_size": 2097152, 00:20:40.490 "send_buf_size": 2097152, 00:20:40.490 "tls_version": 0, 00:20:40.490 "zerocopy_threshold": 0 00:20:40.490 } 00:20:40.490 } 00:20:40.490 ] 00:20:40.490 }, 00:20:40.490 { 00:20:40.490 "subsystem": "vmd", 00:20:40.490 "config": [] 00:20:40.490 }, 00:20:40.490 { 00:20:40.490 "subsystem": "accel", 00:20:40.490 "config": [ 00:20:40.490 { 00:20:40.490 "method": "accel_set_options", 00:20:40.490 "params": { 00:20:40.490 "buf_count": 2048, 00:20:40.490 "large_cache_size": 16, 00:20:40.490 "sequence_count": 2048, 00:20:40.490 "small_cache_size": 128, 00:20:40.490 "task_count": 2048 00:20:40.490 } 00:20:40.490 } 00:20:40.490 ] 00:20:40.490 }, 00:20:40.490 { 00:20:40.490 "subsystem": "bdev", 00:20:40.490 "config": [ 00:20:40.490 { 00:20:40.490 "method": "bdev_set_options", 00:20:40.490 "params": { 00:20:40.490 "bdev_auto_examine": true, 00:20:40.490 "bdev_io_cache_size": 256, 00:20:40.490 "bdev_io_pool_size": 65535, 00:20:40.490 "iobuf_large_cache_size": 16, 00:20:40.490 "iobuf_small_cache_size": 128 00:20:40.490 } 00:20:40.490 }, 00:20:40.490 { 00:20:40.490 "method": "bdev_raid_set_options", 00:20:40.490 "params": { 00:20:40.490 "process_window_size_kb": 1024 00:20:40.490 } 00:20:40.490 }, 00:20:40.490 { 00:20:40.490 "method": "bdev_iscsi_set_options", 00:20:40.490 "params": { 00:20:40.490 "timeout_sec": 30 00:20:40.490 } 00:20:40.490 }, 00:20:40.490 { 00:20:40.490 "method": "bdev_nvme_set_options", 00:20:40.490 "params": { 00:20:40.490 "action_on_timeout": "none", 
00:20:40.490 "allow_accel_sequence": false, 00:20:40.490 "arbitration_burst": 0, 00:20:40.490 "bdev_retry_count": 3, 00:20:40.490 "ctrlr_loss_timeout_sec": 0, 00:20:40.490 "delay_cmd_submit": true, 00:20:40.490 "dhchap_dhgroups": [ 00:20:40.490 "null", 00:20:40.490 "ffdhe2048", 00:20:40.490 "ffdhe3072", 00:20:40.490 "ffdhe4096", 00:20:40.490 "ffdhe6144", 00:20:40.490 "ffdhe8192" 00:20:40.490 ], 00:20:40.490 "dhchap_digests": [ 00:20:40.490 "sha256", 00:20:40.490 "sha384", 00:20:40.490 "sha512" 00:20:40.490 ], 00:20:40.490 "disable_auto_failback": false, 00:20:40.490 "fast_io_fail_timeout_sec": 0, 00:20:40.490 "generate_uuids": false, 00:20:40.490 "high_priority_weight": 0, 00:20:40.490 "io_path_stat": false, 00:20:40.490 "io_queue_requests": 512, 00:20:40.490 "keep_alive_timeout_ms": 10000, 00:20:40.490 "low_priority_weight": 0, 00:20:40.490 "medium_priority_weight": 0, 00:20:40.490 "nvme_adminq_poll_period_us": 10000, 00:20:40.490 "nvme_error_stat": false, 00:20:40.490 "nvme_ioq_poll_period_us": 0, 00:20:40.490 "rdma_cm_event_timeout_ms": 0, 00:20:40.490 "rdma_max_cq_size": 0, 00:20:40.490 "rdma_srq_size": 0, 00:20:40.490 "reconnect_delay_sec": 0, 00:20:40.490 "timeout_admin_us": 0, 00:20:40.490 "timeout_us": 0, 00:20:40.490 "transport_ack_timeout": 0, 00:20:40.490 "transport_retry_count": 4, 00:20:40.490 "transport_tos": 0 00:20:40.490 } 00:20:40.490 }, 00:20:40.490 { 00:20:40.490 "method": "bdev_nvme_attach_controller", 00:20:40.490 "params": { 00:20:40.490 "adrfam": "IPv4", 00:20:40.490 "ctrlr_loss_timeout_sec": 0, 00:20:40.490 "ddgst": false, 00:20:40.490 "fast_io_fail_timeout_sec": 0, 00:20:40.490 "hdgst": false, 00:20:40.490 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:40.490 "name": "nvme0", 00:20:40.490 "prchk_guard": false, 00:20:40.490 "prchk_reftag": false, 00:20:40.490 "psk": "key0", 00:20:40.490 "reconnect_delay_sec": 0, 00:20:40.490 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.490 "traddr": "10.0.0.2", 00:20:40.490 "trsvcid": "4420", 00:20:40.490 "trtype": "TCP" 00:20:40.490 } 00:20:40.490 }, 00:20:40.490 { 00:20:40.490 "method": "bdev_nvme_set_hotplug", 00:20:40.490 "params": { 00:20:40.490 "enable": false, 00:20:40.490 "period_us": 100000 00:20:40.490 } 00:20:40.490 }, 00:20:40.490 { 00:20:40.490 "method": "bdev_enable_histogram", 00:20:40.490 "params": { 00:20:40.490 "enable": true, 00:20:40.490 "name": "nvme0n1" 00:20:40.490 } 00:20:40.490 }, 00:20:40.490 { 00:20:40.490 "method": "bdev_wait_for_examine" 00:20:40.490 } 00:20:40.490 ] 00:20:40.490 }, 00:20:40.490 { 00:20:40.490 "subsystem": "nbd", 00:20:40.490 "config": [] 00:20:40.490 } 00:20:40.490 ] 00:20:40.490 }' 00:20:40.490 14:38:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.749 [2024-07-10 14:38:52.836641] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:20:40.749 [2024-07-10 14:38:52.836765] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101688 ] 00:20:40.749 [2024-07-10 14:38:52.959884] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
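Once the config-driven bdevperf is up, nothing needs to be attached by hand any more; the test only confirms that the controller restored from bperfcfg is present and then drives I/O, as the target/tls.sh@275/@276 steps below show. A condensed version, with commands copied from the trace and only the $RPC shorthand added:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    name=$($RPC bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]                                # controller came back from the saved config
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests           # 1-second verify run over the TLS connection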
00:20:40.749 [2024-07-10 14:38:52.978542] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.749 [2024-07-10 14:38:53.019773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.011 [2024-07-10 14:38:53.154196] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:41.581 14:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:41.581 14:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:41.582 14:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:41.582 14:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:20:42.147 14:38:54 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.147 14:38:54 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:42.147 Running I/O for 1 seconds... 00:20:43.083 00:20:43.083 Latency(us) 00:20:43.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.083 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:43.083 Verification LBA range: start 0x0 length 0x2000 00:20:43.084 nvme0n1 : 1.02 3959.71 15.47 0.00 0.00 31978.09 7804.74 27286.81 00:20:43.084 =================================================================================================================== 00:20:43.084 Total : 3959.71 15.47 0.00 0.00 31978.09 7804.74 27286.81 00:20:43.084 0 00:20:43.084 14:38:55 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:20:43.084 14:38:55 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:20:43.084 14:38:55 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:43.084 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:20:43.084 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:20:43.084 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:43.084 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:43.084 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:43.084 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:43.084 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:43.084 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:43.084 nvmf_trace.0 00:20:43.342 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:20:43.342 14:38:55 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 101688 00:20:43.342 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 101688 ']' 00:20:43.342 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 101688 00:20:43.342 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:43.342 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:43.342 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101688 00:20:43.342 killing process with pid 101688 00:20:43.342 Received shutdown signal, test time was about 1.000000 
seconds 00:20:43.342 00:20:43.342 Latency(us) 00:20:43.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.342 =================================================================================================================== 00:20:43.342 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:43.342 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:43.342 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:43.342 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101688' 00:20:43.342 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 101688 00:20:43.342 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 101688 00:20:43.342 14:38:55 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:43.342 14:38:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:43.342 14:38:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:20:43.600 14:38:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:43.600 14:38:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:43.600 14:38:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:43.600 14:38:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:43.600 rmmod nvme_tcp 00:20:43.600 rmmod nvme_fabrics 00:20:43.600 rmmod nvme_keyring 00:20:43.600 14:38:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:43.600 14:38:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:43.600 14:38:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:43.600 14:38:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 101644 ']' 00:20:43.600 14:38:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 101644 00:20:43.600 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 101644 ']' 00:20:43.600 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 101644 00:20:43.600 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:43.600 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:43.600 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101644 00:20:43.600 killing process with pid 101644 00:20:43.600 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:43.600 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:43.600 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101644' 00:20:43.600 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 101644 00:20:43.600 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 101644 00:20:43.600 14:38:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:43.600 14:38:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:43.600 14:38:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:43.600 14:38:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:43.600 14:38:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:43.600 14:38:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.600 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 14> /dev/null' 00:20:43.600 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.860 14:38:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:43.860 14:38:55 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.kfisWWiByd /tmp/tmp.mg5DatZNCS /tmp/tmp.6xvUcQqoIH 00:20:43.860 ************************************ 00:20:43.860 END TEST nvmf_tls 00:20:43.860 ************************************ 00:20:43.860 00:20:43.860 real 1m16.501s 00:20:43.860 user 2m0.738s 00:20:43.860 sys 0m26.239s 00:20:43.860 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:43.860 14:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.860 14:38:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:43.860 14:38:55 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:43.860 14:38:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:43.860 14:38:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:43.860 14:38:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:43.860 ************************************ 00:20:43.860 START TEST nvmf_fips 00:20:43.860 ************************************ 00:20:43.860 14:38:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:43.860 * Looking for test storage... 00:20:43.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:43.860 14:38:56 
nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:43.860 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 
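One detail worth pulling out of the nvmf_tls teardown above: process_shm archives the target's trace buffer before anything is killed. The app was started with shared-memory id 0, so its trace data lives in /dev/shm/nvmf_trace.0; done by hand, that collection step is roughly (paths as used in this run):

  # locate and archive the trace buffer left by an app started with shm id 0
  find /dev/shm -name '*.0' -printf '%f\n'
  tar -C /dev/shm/ -cvzf ./nvmf_trace.0_shm.tar.gz nvmf_trace.0

  # while the target is still running, the same data can be inspected live,
  # as the target's own startup notice suggests:
  # spdk_trace -s nvmf -i 0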
00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:43.861 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:20:44.121 Error setting digest 00:20:44.121 00A2B761987F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:44.121 00A2B761987F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:44.121 Cannot find device "nvmf_tgt_br" 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:44.121 Cannot find device "nvmf_tgt_br2" 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:44.121 Cannot find device "nvmf_tgt_br" 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:44.121 Cannot find device "nvmf_tgt_br2" 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:44.121 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:44.121 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:44.121 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:44.122 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:44.122 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:44.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:44.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:20:44.380 00:20:44.380 --- 10.0.0.2 ping statistics --- 00:20:44.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.380 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:44.380 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:44.380 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:20:44.380 00:20:44.380 --- 10.0.0.3 ping statistics --- 00:20:44.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.380 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:44.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:44.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:20:44.380 00:20:44.380 --- 10.0.0.1 ping statistics --- 00:20:44.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.380 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=101971 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 101971 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:44.380 14:38:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 101971 ']' 00:20:44.381 14:38:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.381 14:38:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:44.381 14:38:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.381 14:38:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:44.381 14:38:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:44.639 [2024-07-10 14:38:56.713886] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:20:44.639 [2024-07-10 14:38:56.713983] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.640 [2024-07-10 14:38:56.832996] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:20:44.640 [2024-07-10 14:38:56.853419] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.640 [2024-07-10 14:38:56.892668] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.640 [2024-07-10 14:38:56.892729] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.640 [2024-07-10 14:38:56.892742] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.640 [2024-07-10 14:38:56.892752] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.640 [2024-07-10 14:38:56.892774] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.640 [2024-07-10 14:38:56.892811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.587 14:38:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:45.587 14:38:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:45.587 14:38:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:45.587 14:38:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:45.587 14:38:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:45.587 14:38:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:45.587 14:38:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:45.587 14:38:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:45.587 14:38:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:45.587 14:38:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:45.587 14:38:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:45.587 14:38:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:45.587 14:38:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:45.587 14:38:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:45.846 [2024-07-10 14:38:57.928000] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.846 [2024-07-10 14:38:57.943948] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:45.846 [2024-07-10 14:38:57.944160] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.846 [2024-07-10 14:38:57.970620] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:45.846 malloc0 00:20:45.846 14:38:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:45.846 14:38:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=102024 00:20:45.846 14:38:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:45.846 14:38:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 102024 
/var/tmp/bdevperf.sock 00:20:45.846 14:38:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 102024 ']' 00:20:45.846 14:38:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:45.846 14:38:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:45.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:45.846 14:38:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:45.846 14:38:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:45.846 14:38:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:45.846 [2024-07-10 14:38:58.080227] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:20:45.846 [2024-07-10 14:38:58.080348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102024 ] 00:20:46.105 [2024-07-10 14:38:58.202412] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:46.105 [2024-07-10 14:38:58.222114] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.105 [2024-07-10 14:38:58.263465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.035 14:38:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:47.035 14:38:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:47.035 14:38:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:47.035 [2024-07-10 14:38:59.311965] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:47.035 [2024-07-10 14:38:59.312099] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:47.293 TLSTESTn1 00:20:47.293 14:38:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:47.293 Running I/O for 10 seconds... 
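Before the 10-second run whose results follow, fips.sh has written the TLS PSK in NVMe/TCP interchange format to a key file chmodded to 0600, configured the target listener with it inside setup_nvmf_tgt_conf, and attached the TLSTEST controller from bdevperf with --psk pointing at that file (a form the log itself flags as deprecated for v24.09). Pulled out of the trace above, the initiator-side steps amount to the following sketch:

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt

  echo -n "$key" > "$key_path"
  chmod 0600 "$key_path"          # the test restricts the key file before using it

  # attach over TLS using the key file path (newer releases prefer keyring names)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"

  # drive the verify workload through bdevperf's RPC helper
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests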
00:20:57.261 00:20:57.261 Latency(us) 00:20:57.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.261 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:57.261 Verification LBA range: start 0x0 length 0x2000 00:20:57.262 TLSTESTn1 : 10.02 3855.71 15.06 0.00 0.00 33139.19 4736.47 41228.10 00:20:57.262 =================================================================================================================== 00:20:57.262 Total : 3855.71 15.06 0.00 0.00 33139.19 4736.47 41228.10 00:20:57.262 0 00:20:57.520 14:39:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:57.520 14:39:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:57.520 14:39:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:20:57.520 14:39:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:20:57.520 14:39:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:57.520 14:39:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:57.520 14:39:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:57.520 14:39:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:57.520 14:39:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:57.520 14:39:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:57.520 nvmf_trace.0 00:20:57.520 14:39:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:20:57.520 14:39:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 102024 00:20:57.520 14:39:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 102024 ']' 00:20:57.520 14:39:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 102024 00:20:57.520 14:39:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:57.520 14:39:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:57.520 14:39:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 102024 00:20:57.520 14:39:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:57.520 killing process with pid 102024 00:20:57.520 14:39:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:57.520 14:39:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 102024' 00:20:57.520 14:39:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 102024 00:20:57.520 Received shutdown signal, test time was about 10.000000 seconds 00:20:57.520 00:20:57.520 Latency(us) 00:20:57.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.520 =================================================================================================================== 00:20:57.520 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:57.520 [2024-07-10 14:39:09.697377] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:57.520 14:39:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 102024 00:20:57.778 14:39:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:57.778 14:39:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:20:57.778 14:39:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:57.778 14:39:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:57.778 14:39:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:57.778 14:39:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:57.778 14:39:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:57.778 rmmod nvme_tcp 00:20:57.778 rmmod nvme_fabrics 00:20:57.778 rmmod nvme_keyring 00:20:57.778 14:39:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:57.778 14:39:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:57.778 14:39:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:57.778 14:39:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 101971 ']' 00:20:57.778 14:39:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 101971 00:20:57.778 14:39:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 101971 ']' 00:20:57.778 14:39:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 101971 00:20:57.778 14:39:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:57.778 14:39:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:57.778 14:39:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101971 00:20:57.778 14:39:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:57.778 killing process with pid 101971 00:20:57.778 14:39:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:57.778 14:39:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101971' 00:20:57.778 14:39:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 101971 00:20:57.778 [2024-07-10 14:39:09.975506] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:57.778 14:39:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 101971 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:58.037 00:20:58.037 real 0m14.216s 00:20:58.037 user 0m19.578s 00:20:58.037 sys 0m5.601s 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:58.037 ************************************ 00:20:58.037 END TEST nvmf_fips 00:20:58.037 ************************************ 
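Both suites end the same way: killprocess checks that the pid is still alive, logs which reactor it belongs to, sends it a signal, and waits for it to exit, as the xtrace above shows for pids 102024 and 101971. A condensed sketch of that pattern; the real helper in autotest_common.sh has extra handling (for example for processes started via sudo) that is simplified away here:

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                # is the process still alive?
      ps --no-headers -o comm= "$pid"           # e.g. reactor_1 for the nvmf target
      echo "killing process with pid $pid"
      kill "$pid"                               # SIGTERM by default
      wait "$pid"                               # reap it and surface its exit status
  }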
00:20:58.037 14:39:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:58.037 14:39:10 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:20:58.037 14:39:10 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:20:58.037 14:39:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:58.037 14:39:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:58.037 14:39:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:58.037 ************************************ 00:20:58.037 START TEST nvmf_fuzz 00:20:58.037 ************************************ 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:20:58.037 * Looking for test storage... 00:20:58.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:58.037 14:39:10 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:58.037 14:39:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:58.300 Cannot find device "nvmf_tgt_br" 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # true 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:58.300 Cannot find device "nvmf_tgt_br2" 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # true 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:58.300 Cannot find device "nvmf_tgt_br" 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # true 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:58.300 Cannot find device "nvmf_tgt_br2" 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # true 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- 
# ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:58.300 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:58.300 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:58.300 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:58.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:58.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:20:58.558 00:20:58.558 --- 10.0.0.2 ping statistics --- 00:20:58.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.558 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:58.558 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:58.558 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:20:58.558 00:20:58.558 --- 10.0.0.3 ping statistics --- 00:20:58.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.558 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:58.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:58.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:20:58.558 00:20:58.558 --- 10.0.0.1 ping statistics --- 00:20:58.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.558 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@433 -- # return 0 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=102375 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 102375 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 102375 ']' 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.558 14:39:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:58.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.559 14:39:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:58.559 14:39:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:58.559 14:39:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:59.934 14:39:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:59.934 14:39:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:20:59.934 14:39:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:59.934 14:39:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.934 14:39:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:59.934 14:39:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.934 14:39:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:20:59.934 14:39:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.934 14:39:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:59.934 Malloc0 00:20:59.934 14:39:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.934 14:39:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:59.934 14:39:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.934 14:39:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:59.934 14:39:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.934 14:39:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:59.934 14:39:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.934 14:39:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:59.934 14:39:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.934 14:39:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:59.934 14:39:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.934 14:39:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:59.935 14:39:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.935 14:39:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:20:59.935 14:39:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:20:59.935 Shutting down the fuzz application 00:20:59.935 14:39:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:21:00.193 Shutting down the fuzz application 00:21:00.193 14:39:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:00.193 14:39:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.193 14:39:12 nvmf_tcp.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:21:00.193 14:39:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.193 14:39:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:21:00.193 14:39:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:21:00.193 14:39:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:00.193 14:39:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:21:00.193 14:39:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:00.193 14:39:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:21:00.193 14:39:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:00.193 14:39:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:00.452 rmmod nvme_tcp 00:21:00.452 rmmod nvme_fabrics 00:21:00.452 rmmod nvme_keyring 00:21:00.452 14:39:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:00.452 14:39:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:21:00.452 14:39:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:21:00.452 14:39:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 102375 ']' 00:21:00.452 14:39:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 102375 00:21:00.452 14:39:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 102375 ']' 00:21:00.452 14:39:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 102375 00:21:00.452 14:39:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:21:00.452 14:39:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:00.452 14:39:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 102375 00:21:00.452 14:39:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:00.452 14:39:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:00.452 killing process with pid 102375 00:21:00.452 14:39:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 102375' 00:21:00.452 14:39:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 102375 00:21:00.452 14:39:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 102375 00:21:00.452 14:39:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:00.452 14:39:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:00.452 14:39:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:00.452 14:39:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:00.452 14:39:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:00.452 14:39:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.452 14:39:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:00.452 14:39:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.710 14:39:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:00.710 14:39:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:21:00.710 00:21:00.710 real 0m2.562s 00:21:00.710 user 0m2.592s 00:21:00.710 sys 0m0.590s 00:21:00.710 14:39:12 
nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:00.710 ************************************ 00:21:00.710 14:39:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:00.710 END TEST nvmf_fuzz 00:21:00.710 ************************************ 00:21:00.710 14:39:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:00.710 14:39:12 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:00.710 14:39:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:00.710 14:39:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:00.710 14:39:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:00.710 ************************************ 00:21:00.710 START TEST nvmf_multiconnection 00:21:00.710 ************************************ 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:00.710 * Looking for test storage... 00:21:00.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.710 14:39:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:00.711 14:39:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.711 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:00.711 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:00.711 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:00.711 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:00.711 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:00.711 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:00.711 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:00.711 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:00.711 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:00.711 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:00.711 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:00.711 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:00.711 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:00.711 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:00.711 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:00.711 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:00.711 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:00.711 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:00.711 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:00.711 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:00.711 Cannot find device "nvmf_tgt_br" 00:21:00.711 14:39:12 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # true 00:21:00.711 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:00.711 Cannot find device "nvmf_tgt_br2" 00:21:00.711 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # true 00:21:00.711 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:00.711 14:39:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:00.968 Cannot find device "nvmf_tgt_br" 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # true 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:00.968 Cannot find device "nvmf_tgt_br2" 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # true 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:00.968 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:00.968 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:00.968 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:01.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:01.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:21:01.225 00:21:01.225 --- 10.0.0.2 ping statistics --- 00:21:01.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.225 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:21:01.225 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:01.225 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:01.225 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:21:01.225 00:21:01.225 --- 10.0.0.3 ping statistics --- 00:21:01.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.225 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:21:01.225 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:01.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:01.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:21:01.225 00:21:01.225 --- 10.0.0.1 ping statistics --- 00:21:01.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.225 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:21:01.225 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:01.225 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@433 -- # return 0 00:21:01.225 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:01.225 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:01.225 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:01.225 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:01.225 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:01.225 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:01.225 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:01.225 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:21:01.225 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:01.225 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:01.225 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.225 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=102583 00:21:01.225 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:01.225 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 102583 00:21:01.225 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 102583 ']' 00:21:01.225 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.225 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:01.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.225 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.225 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:01.225 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.226 [2024-07-10 14:39:13.348095] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:21:01.226 [2024-07-10 14:39:13.348185] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.226 [2024-07-10 14:39:13.474477] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
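The rest of the trace is the target being provisioned over RPC. Condensed, and assuming rpc_cmd resolves to scripts/rpc.py talking to the default /var/tmp/spdk.sock (the wait loop below is a simplified stand-in for the harness's waitforlisten helper, not its actual code), the multiconnection setup amounts to:

    # Start the target inside the namespace, exactly as in the trace (-m 0xF = 4 cores)
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Simplified stand-in for waitforlisten: poll the RPC socket until it answers
    until "$rpc" spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done

    # TCP transport with an 8192-byte in-capsule data size, as requested by the test
    "$rpc" nvmf_create_transport -t tcp -o -u 8192

    # One malloc-backed subsystem per connection, NVMF_SUBSYS=11 in total
    for i in $(seq 1 11); do
        "$rpc" bdev_malloc_create 64 512 -b "Malloc$i"
        "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done

Each loop iteration corresponds to one Malloc1..Malloc11 block in the trace below: a 64 MiB, 512-byte-block malloc bdev, a subsystem with serial SPDK$i, the namespace attach, and a TCP listener on 10.0.0.2:4420.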
00:21:01.226 [2024-07-10 14:39:13.492550] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:01.484 [2024-07-10 14:39:13.534694] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:01.484 [2024-07-10 14:39:13.534758] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:01.484 [2024-07-10 14:39:13.534772] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:01.484 [2024-07-10 14:39:13.534783] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:01.484 [2024-07-10 14:39:13.534792] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:01.484 [2024-07-10 14:39:13.534895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.484 [2024-07-10 14:39:13.534958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.484 [2024-07-10 14:39:13.535661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:01.484 [2024-07-10 14:39:13.535695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.484 [2024-07-10 14:39:13.701141] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.484 Malloc1 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.484 [2024-07-10 14:39:13.766129] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.484 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.744 Malloc2 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.744 Malloc3 00:21:01.744 
14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.744 Malloc4 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.744 Malloc5 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.744 Malloc6 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.744 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.745 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:21:01.745 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.745 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.745 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.745 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:01.745 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:21:01.745 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.745 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.745 Malloc7 00:21:01.745 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.745 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:21:01.745 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.745 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.745 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.745 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:21:01.745 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.745 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.745 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.745 14:39:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:21:01.745 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.745 14:39:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.745 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.745 14:39:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:01.745 14:39:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:21:01.745 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.745 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:01.745 Malloc8 00:21:01.745 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.745 14:39:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:21:01.745 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.745 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:02.003 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.003 14:39:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:21:02.003 
14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.003 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:02.003 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.003 14:39:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:21:02.003 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.003 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:02.003 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.003 14:39:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:02.003 14:39:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:21:02.003 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.003 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:02.003 Malloc9 00:21:02.003 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.003 14:39:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:21:02.003 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.003 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:02.003 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.003 14:39:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:21:02.003 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.003 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:02.003 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.003 14:39:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:21:02.003 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.003 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:02.003 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.003 14:39:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:02.004 Malloc10 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:02.004 Malloc11 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:02.004 14:39:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:02.262 14:39:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:21:02.262 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:21:02.262 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:02.262 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:02.262 14:39:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:21:04.160 14:39:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:04.160 14:39:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:04.160 14:39:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:21:04.160 14:39:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:04.160 14:39:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:04.160 14:39:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:21:04.160 14:39:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:04.160 14:39:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:21:04.418 14:39:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:21:04.418 14:39:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:21:04.418 14:39:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:04.418 14:39:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:04.418 14:39:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:21:06.318 14:39:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:06.318 14:39:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:06.319 14:39:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:21:06.319 14:39:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:06.319 14:39:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:06.319 14:39:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:21:06.319 14:39:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:06.319 14:39:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:21:06.577 14:39:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:21:06.577 14:39:18 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:21:06.577 14:39:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:06.577 14:39:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:06.577 14:39:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:21:08.480 14:39:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:08.480 14:39:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:08.480 14:39:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:21:08.480 14:39:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:08.480 14:39:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:08.480 14:39:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:21:08.480 14:39:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:08.480 14:39:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:21:08.739 14:39:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:21:08.739 14:39:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:21:08.739 14:39:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:08.739 14:39:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:08.739 14:39:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:21:10.642 14:39:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:10.642 14:39:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:10.642 14:39:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:21:10.912 14:39:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:10.912 14:39:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:10.912 14:39:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:21:10.912 14:39:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:10.912 14:39:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:21:10.912 14:39:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:21:10.912 14:39:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:21:10.912 14:39:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:10.912 14:39:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 
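The entries above trace the target-side provisioning loop in target/multiconnection.sh (trace lines @21-@25): for each of the 11 subsystems it creates a 64 MB malloc bdev with a 512-byte block size, creates subsystem nqn.2016-06.io.spdk:cnodeN with any-host access (-a) and serial SPDKN, attaches the bdev as a namespace, and adds an NVMe/TCP listener on 10.0.0.2:4420. A minimal sketch of that loop, built only from the commands visible in the trace (rpc_cmd is the test harness's wrapper for SPDK JSON-RPC calls; NVMF_SUBSYS expands to 11 in this run):

for i in $(seq 1 $NVMF_SUBSYS); do
    # 64 MB malloc bdev with 512-byte blocks, named MallocN
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
    # subsystem with serial SPDKN; -a allows any host NQN to connect
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    # expose the bdev as a namespace of that subsystem
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    # listen for NVMe/TCP on the target address, port 4420
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done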
00:21:10.912 14:39:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:21:13.443 14:39:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:13.443 14:39:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:13.443 14:39:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:21:13.443 14:39:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:13.443 14:39:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:13.443 14:39:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:21:13.443 14:39:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.443 14:39:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:21:13.443 14:39:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:21:13.443 14:39:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:21:13.443 14:39:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:13.443 14:39:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:13.443 14:39:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:21:15.347 14:39:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:15.347 14:39:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:15.347 14:39:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:21:15.347 14:39:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:15.347 14:39:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:15.347 14:39:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:21:15.347 14:39:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:15.347 14:39:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:21:15.347 14:39:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:21:15.347 14:39:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:21:15.347 14:39:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:15.347 14:39:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:15.347 14:39:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:21:17.254 14:39:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:17.254 14:39:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 
00:21:17.254 14:39:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:21:17.255 14:39:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:17.255 14:39:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:17.255 14:39:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:21:17.255 14:39:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:17.255 14:39:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:21:17.513 14:39:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:21:17.513 14:39:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:21:17.513 14:39:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:17.513 14:39:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:17.513 14:39:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:21:19.413 14:39:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:19.413 14:39:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:19.413 14:39:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:21:19.671 14:39:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:19.671 14:39:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:19.671 14:39:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:21:19.671 14:39:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:19.671 14:39:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:21:19.671 14:39:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:21:19.671 14:39:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:21:19.671 14:39:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:19.671 14:39:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:19.671 14:39:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:21:22.199 14:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:22.199 14:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:22.199 14:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:21:22.199 14:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:22.199 14:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == 
nvme_device_counter )) 00:21:22.199 14:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:21:22.199 14:39:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:22.199 14:39:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:21:22.199 14:39:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:21:22.199 14:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:21:22.199 14:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:22.199 14:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:22.199 14:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:21:24.097 14:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:24.097 14:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:24.097 14:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:21:24.097 14:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:24.097 14:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:24.097 14:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:21:24.097 14:39:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:24.097 14:39:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:21:24.097 14:39:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:21:24.097 14:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:21:24.097 14:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:24.097 14:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:24.097 14:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:21:26.626 14:39:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:26.626 14:39:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:26.626 14:39:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:21:26.626 14:39:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:26.626 14:39:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:26.626 14:39:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:21:26.626 14:39:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:21:26.626 
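With all 11 subsystems exported, the entries above trace the host-side attach loop (target/multiconnection.sh @28-@30): each cnode is connected over NVMe/TCP using the host NQN and host ID shown in the trace, and waitforserial (autotest_common.sh) then polls lsblk until a block device whose serial matches SPDKN appears, sleeping 2 s between attempts for up to 16 tries. A simplified sketch of that sequence, with HOSTNQN and HOSTID standing in for the UUID-based values in the trace and the polling loop condensed from the traced waitforserial:

for i in $(seq 1 $NVMF_SUBSYS); do
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
    # wait until the namespace shows up as a block device with serial SPDKN
    tries=0
    while (( tries++ <= 15 )); do
        sleep 2
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")
        (( nvme_devices == 1 )) && break
    done
done

The [global]/[jobN] text that follows is the fio job file generated by fio-wrapper for the read pass: one job per attached namespace (11 jobs on /dev/nvme0n1 through /dev/nvme10n1), 256 KiB reads (bs=262144) at iodepth=64, time-based for runtime=10 seconds, using ioengine=libaio with direct=1.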
[global] 00:21:26.626 thread=1 00:21:26.626 invalidate=1 00:21:26.626 rw=read 00:21:26.626 time_based=1 00:21:26.626 runtime=10 00:21:26.626 ioengine=libaio 00:21:26.626 direct=1 00:21:26.626 bs=262144 00:21:26.626 iodepth=64 00:21:26.626 norandommap=1 00:21:26.626 numjobs=1 00:21:26.626 00:21:26.626 [job0] 00:21:26.626 filename=/dev/nvme0n1 00:21:26.626 [job1] 00:21:26.626 filename=/dev/nvme10n1 00:21:26.626 [job2] 00:21:26.626 filename=/dev/nvme1n1 00:21:26.626 [job3] 00:21:26.626 filename=/dev/nvme2n1 00:21:26.626 [job4] 00:21:26.626 filename=/dev/nvme3n1 00:21:26.626 [job5] 00:21:26.626 filename=/dev/nvme4n1 00:21:26.626 [job6] 00:21:26.626 filename=/dev/nvme5n1 00:21:26.626 [job7] 00:21:26.626 filename=/dev/nvme6n1 00:21:26.626 [job8] 00:21:26.626 filename=/dev/nvme7n1 00:21:26.626 [job9] 00:21:26.626 filename=/dev/nvme8n1 00:21:26.626 [job10] 00:21:26.626 filename=/dev/nvme9n1 00:21:26.626 Could not set queue depth (nvme0n1) 00:21:26.626 Could not set queue depth (nvme10n1) 00:21:26.626 Could not set queue depth (nvme1n1) 00:21:26.626 Could not set queue depth (nvme2n1) 00:21:26.626 Could not set queue depth (nvme3n1) 00:21:26.626 Could not set queue depth (nvme4n1) 00:21:26.626 Could not set queue depth (nvme5n1) 00:21:26.626 Could not set queue depth (nvme6n1) 00:21:26.626 Could not set queue depth (nvme7n1) 00:21:26.626 Could not set queue depth (nvme8n1) 00:21:26.626 Could not set queue depth (nvme9n1) 00:21:26.626 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:26.626 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:26.626 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:26.626 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:26.626 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:26.626 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:26.626 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:26.626 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:26.626 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:26.626 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:26.626 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:26.626 fio-3.35 00:21:26.626 Starting 11 threads 00:21:38.847 00:21:38.847 job0: (groupid=0, jobs=1): err= 0: pid=103042: Wed Jul 10 14:39:48 2024 00:21:38.847 read: IOPS=676, BW=169MiB/s (177MB/s)(1707MiB/10092msec) 00:21:38.847 slat (usec): min=14, max=81329, avg=1446.60, stdev=6203.53 00:21:38.847 clat (msec): min=11, max=195, avg=93.02, stdev=34.22 00:21:38.847 lat (msec): min=12, max=201, avg=94.47, stdev=35.14 00:21:38.847 clat percentiles (msec): 00:21:38.847 | 1.00th=[ 20], 5.00th=[ 29], 10.00th=[ 35], 20.00th=[ 58], 00:21:38.847 | 30.00th=[ 70], 40.00th=[ 102], 50.00th=[ 108], 60.00th=[ 111], 00:21:38.847 | 70.00th=[ 115], 80.00th=[ 120], 90.00th=[ 127], 95.00th=[ 133], 00:21:38.847 | 99.00th=[ 148], 99.50th=[ 150], 99.90th=[ 197], 
99.95th=[ 197], 00:21:38.847 | 99.99th=[ 197] 00:21:38.847 bw ( KiB/s): min=129024, max=444928, per=9.83%, avg=173194.70, stdev=75634.59, samples=20 00:21:38.847 iops : min= 504, max= 1738, avg=676.45, stdev=295.49, samples=20 00:21:38.847 lat (msec) : 20=1.32%, 50=14.28%, 100=23.27%, 250=61.14% 00:21:38.847 cpu : usr=0.24%, sys=2.32%, ctx=1542, majf=0, minf=4097 00:21:38.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:21:38.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:38.847 issued rwts: total=6829,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:38.847 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:38.847 job1: (groupid=0, jobs=1): err= 0: pid=103043: Wed Jul 10 14:39:48 2024 00:21:38.847 read: IOPS=688, BW=172MiB/s (181MB/s)(1733MiB/10064msec) 00:21:38.847 slat (usec): min=17, max=47916, avg=1437.12, stdev=4990.51 00:21:38.847 clat (msec): min=19, max=146, avg=91.33, stdev=11.67 00:21:38.847 lat (msec): min=19, max=150, avg=92.77, stdev=12.49 00:21:38.847 clat percentiles (msec): 00:21:38.847 | 1.00th=[ 63], 5.00th=[ 75], 10.00th=[ 79], 20.00th=[ 83], 00:21:38.847 | 30.00th=[ 87], 40.00th=[ 89], 50.00th=[ 92], 60.00th=[ 94], 00:21:38.847 | 70.00th=[ 97], 80.00th=[ 101], 90.00th=[ 105], 95.00th=[ 109], 00:21:38.847 | 99.00th=[ 121], 99.50th=[ 126], 99.90th=[ 140], 99.95th=[ 140], 00:21:38.847 | 99.99th=[ 146] 00:21:38.847 bw ( KiB/s): min=155648, max=188928, per=9.98%, avg=175907.50, stdev=8518.07, samples=20 00:21:38.847 iops : min= 608, max= 738, avg=687.05, stdev=33.27, samples=20 00:21:38.847 lat (msec) : 20=0.10%, 50=0.40%, 100=79.06%, 250=20.44% 00:21:38.847 cpu : usr=0.25%, sys=2.10%, ctx=2199, majf=0, minf=4097 00:21:38.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:21:38.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:38.847 issued rwts: total=6933,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:38.847 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:38.847 job2: (groupid=0, jobs=1): err= 0: pid=103044: Wed Jul 10 14:39:48 2024 00:21:38.847 read: IOPS=733, BW=183MiB/s (192MB/s)(1846MiB/10071msec) 00:21:38.847 slat (usec): min=16, max=55520, avg=1351.21, stdev=4977.99 00:21:38.847 clat (msec): min=11, max=141, avg=85.81, stdev=12.46 00:21:38.847 lat (msec): min=11, max=150, avg=87.16, stdev=13.20 00:21:38.847 clat percentiles (msec): 00:21:38.847 | 1.00th=[ 43], 5.00th=[ 68], 10.00th=[ 72], 20.00th=[ 78], 00:21:38.847 | 30.00th=[ 82], 40.00th=[ 84], 50.00th=[ 87], 60.00th=[ 89], 00:21:38.847 | 70.00th=[ 92], 80.00th=[ 95], 90.00th=[ 100], 95.00th=[ 106], 00:21:38.847 | 99.00th=[ 114], 99.50th=[ 129], 99.90th=[ 142], 99.95th=[ 142], 00:21:38.847 | 99.99th=[ 142] 00:21:38.847 bw ( KiB/s): min=167936, max=205924, per=10.63%, avg=187351.55, stdev=11134.31, samples=20 00:21:38.847 iops : min= 656, max= 804, avg=731.80, stdev=43.44, samples=20 00:21:38.847 lat (msec) : 20=0.26%, 50=0.76%, 100=89.53%, 250=9.45% 00:21:38.847 cpu : usr=0.28%, sys=2.43%, ctx=1714, majf=0, minf=4097 00:21:38.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:21:38.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:38.847 issued rwts: 
total=7384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:38.847 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:38.847 job3: (groupid=0, jobs=1): err= 0: pid=103045: Wed Jul 10 14:39:48 2024 00:21:38.847 read: IOPS=702, BW=176MiB/s (184MB/s)(1769MiB/10065msec) 00:21:38.847 slat (usec): min=11, max=53929, avg=1402.46, stdev=4957.84 00:21:38.847 clat (msec): min=19, max=143, avg=89.48, stdev=12.20 00:21:38.847 lat (msec): min=19, max=155, avg=90.88, stdev=12.91 00:21:38.847 clat percentiles (msec): 00:21:38.847 | 1.00th=[ 52], 5.00th=[ 72], 10.00th=[ 77], 20.00th=[ 82], 00:21:38.847 | 30.00th=[ 85], 40.00th=[ 87], 50.00th=[ 90], 60.00th=[ 92], 00:21:38.848 | 70.00th=[ 95], 80.00th=[ 99], 90.00th=[ 103], 95.00th=[ 109], 00:21:38.848 | 99.00th=[ 123], 99.50th=[ 133], 99.90th=[ 140], 99.95th=[ 140], 00:21:38.848 | 99.99th=[ 144] 00:21:38.848 bw ( KiB/s): min=142336, max=195704, per=10.19%, avg=179497.15, stdev=11421.19, samples=20 00:21:38.848 iops : min= 556, max= 764, avg=701.10, stdev=44.59, samples=20 00:21:38.848 lat (msec) : 20=0.08%, 50=0.90%, 100=83.86%, 250=15.15% 00:21:38.848 cpu : usr=0.22%, sys=2.20%, ctx=2162, majf=0, minf=4097 00:21:38.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:21:38.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:38.848 issued rwts: total=7075,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:38.848 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:38.848 job4: (groupid=0, jobs=1): err= 0: pid=103046: Wed Jul 10 14:39:48 2024 00:21:38.848 read: IOPS=603, BW=151MiB/s (158MB/s)(1523MiB/10088msec) 00:21:38.848 slat (usec): min=17, max=91170, avg=1620.54, stdev=5747.73 00:21:38.848 clat (msec): min=41, max=182, avg=104.18, stdev=24.56 00:21:38.848 lat (msec): min=42, max=195, avg=105.80, stdev=25.36 00:21:38.848 clat percentiles (msec): 00:21:38.848 | 1.00th=[ 50], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 78], 00:21:38.848 | 30.00th=[ 101], 40.00th=[ 107], 50.00th=[ 112], 60.00th=[ 115], 00:21:38.848 | 70.00th=[ 120], 80.00th=[ 124], 90.00th=[ 129], 95.00th=[ 134], 00:21:38.848 | 99.00th=[ 153], 99.50th=[ 157], 99.90th=[ 163], 99.95th=[ 165], 00:21:38.848 | 99.99th=[ 184] 00:21:38.848 bw ( KiB/s): min=119535, max=261109, per=8.76%, avg=154303.55, stdev=40452.12, samples=20 00:21:38.848 iops : min= 466, max= 1019, avg=602.60, stdev=157.95, samples=20 00:21:38.848 lat (msec) : 50=1.05%, 100=28.25%, 250=70.70% 00:21:38.848 cpu : usr=0.25%, sys=1.87%, ctx=1957, majf=0, minf=4097 00:21:38.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:21:38.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:38.848 issued rwts: total=6092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:38.848 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:38.848 job5: (groupid=0, jobs=1): err= 0: pid=103047: Wed Jul 10 14:39:48 2024 00:21:38.848 read: IOPS=556, BW=139MiB/s (146MB/s)(1407MiB/10102msec) 00:21:38.848 slat (usec): min=14, max=78619, avg=1761.77, stdev=7063.81 00:21:38.848 clat (msec): min=15, max=188, avg=113.02, stdev=15.58 00:21:38.848 lat (msec): min=16, max=195, avg=114.78, stdev=16.93 00:21:38.848 clat percentiles (msec): 00:21:38.848 | 1.00th=[ 61], 5.00th=[ 88], 10.00th=[ 95], 20.00th=[ 105], 00:21:38.848 | 30.00th=[ 108], 40.00th=[ 112], 50.00th=[ 115], 
60.00th=[ 117], 00:21:38.848 | 70.00th=[ 121], 80.00th=[ 124], 90.00th=[ 128], 95.00th=[ 133], 00:21:38.848 | 99.00th=[ 163], 99.50th=[ 171], 99.90th=[ 188], 99.95th=[ 188], 00:21:38.848 | 99.99th=[ 188] 00:21:38.848 bw ( KiB/s): min=129024, max=169472, per=8.08%, avg=142357.15, stdev=11199.05, samples=20 00:21:38.848 iops : min= 504, max= 662, avg=556.00, stdev=43.69, samples=20 00:21:38.848 lat (msec) : 20=0.09%, 50=0.78%, 100=12.98%, 250=86.15% 00:21:38.848 cpu : usr=0.23%, sys=1.89%, ctx=1095, majf=0, minf=4097 00:21:38.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:21:38.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:38.848 issued rwts: total=5626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:38.848 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:38.848 job6: (groupid=0, jobs=1): err= 0: pid=103048: Wed Jul 10 14:39:48 2024 00:21:38.848 read: IOPS=551, BW=138MiB/s (145MB/s)(1393MiB/10097msec) 00:21:38.848 slat (usec): min=17, max=84087, avg=1789.77, stdev=7086.39 00:21:38.848 clat (msec): min=32, max=202, avg=114.02, stdev=15.24 00:21:38.848 lat (msec): min=33, max=202, avg=115.81, stdev=16.74 00:21:38.848 clat percentiles (msec): 00:21:38.848 | 1.00th=[ 68], 5.00th=[ 89], 10.00th=[ 99], 20.00th=[ 105], 00:21:38.848 | 30.00th=[ 108], 40.00th=[ 112], 50.00th=[ 115], 60.00th=[ 118], 00:21:38.848 | 70.00th=[ 122], 80.00th=[ 125], 90.00th=[ 131], 95.00th=[ 136], 00:21:38.848 | 99.00th=[ 150], 99.50th=[ 163], 99.90th=[ 188], 99.95th=[ 203], 00:21:38.848 | 99.99th=[ 203] 00:21:38.848 bw ( KiB/s): min=128000, max=169298, per=8.00%, avg=140993.00, stdev=11090.29, samples=20 00:21:38.848 iops : min= 500, max= 661, avg=550.55, stdev=43.39, samples=20 00:21:38.848 lat (msec) : 50=0.52%, 100=11.81%, 250=87.67% 00:21:38.848 cpu : usr=0.24%, sys=1.66%, ctx=1568, majf=0, minf=4097 00:21:38.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:21:38.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:38.848 issued rwts: total=5572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:38.848 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:38.848 job7: (groupid=0, jobs=1): err= 0: pid=103049: Wed Jul 10 14:39:48 2024 00:21:38.848 read: IOPS=594, BW=149MiB/s (156MB/s)(1500MiB/10090msec) 00:21:38.848 slat (usec): min=17, max=72919, avg=1663.90, stdev=6126.10 00:21:38.848 clat (msec): min=36, max=204, avg=105.86, stdev=25.94 00:21:38.848 lat (msec): min=37, max=209, avg=107.53, stdev=26.85 00:21:38.848 clat percentiles (msec): 00:21:38.848 | 1.00th=[ 48], 5.00th=[ 57], 10.00th=[ 63], 20.00th=[ 80], 00:21:38.848 | 30.00th=[ 102], 40.00th=[ 111], 50.00th=[ 114], 60.00th=[ 117], 00:21:38.848 | 70.00th=[ 122], 80.00th=[ 125], 90.00th=[ 130], 95.00th=[ 136], 00:21:38.848 | 99.00th=[ 161], 99.50th=[ 180], 99.90th=[ 205], 99.95th=[ 205], 00:21:38.848 | 99.99th=[ 205] 00:21:38.848 bw ( KiB/s): min=117760, max=260608, per=8.62%, avg=151908.50, stdev=40605.10, samples=20 00:21:38.848 iops : min= 460, max= 1018, avg=593.30, stdev=158.49, samples=20 00:21:38.848 lat (msec) : 50=1.60%, 100=26.74%, 250=71.66% 00:21:38.848 cpu : usr=0.23%, sys=1.98%, ctx=1587, majf=0, minf=4097 00:21:38.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:21:38.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:38.848 issued rwts: total=5999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:38.848 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:38.848 job8: (groupid=0, jobs=1): err= 0: pid=103050: Wed Jul 10 14:39:48 2024 00:21:38.848 read: IOPS=539, BW=135MiB/s (141MB/s)(1362MiB/10102msec) 00:21:38.848 slat (usec): min=18, max=78983, avg=1830.52, stdev=6604.77 00:21:38.848 clat (msec): min=23, max=212, avg=116.62, stdev=15.14 00:21:38.848 lat (msec): min=24, max=212, avg=118.45, stdev=16.43 00:21:38.848 clat percentiles (msec): 00:21:38.848 | 1.00th=[ 71], 5.00th=[ 91], 10.00th=[ 100], 20.00th=[ 108], 00:21:38.848 | 30.00th=[ 113], 40.00th=[ 115], 50.00th=[ 118], 60.00th=[ 121], 00:21:38.848 | 70.00th=[ 125], 80.00th=[ 127], 90.00th=[ 132], 95.00th=[ 138], 00:21:38.848 | 99.00th=[ 148], 99.50th=[ 159], 99.90th=[ 213], 99.95th=[ 213], 00:21:38.848 | 99.99th=[ 213] 00:21:38.848 bw ( KiB/s): min=122880, max=162816, per=7.82%, avg=137854.05, stdev=9827.65, samples=20 00:21:38.848 iops : min= 480, max= 636, avg=538.40, stdev=38.39, samples=20 00:21:38.848 lat (msec) : 50=0.37%, 100=10.20%, 250=89.43% 00:21:38.848 cpu : usr=0.23%, sys=2.09%, ctx=1112, majf=0, minf=4097 00:21:38.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:38.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:38.848 issued rwts: total=5449,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:38.848 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:38.848 job9: (groupid=0, jobs=1): err= 0: pid=103051: Wed Jul 10 14:39:48 2024 00:21:38.848 read: IOPS=721, BW=180MiB/s (189MB/s)(1820MiB/10093msec) 00:21:38.848 slat (usec): min=14, max=69808, avg=1360.87, stdev=5177.12 00:21:38.848 clat (msec): min=13, max=199, avg=87.21, stdev=40.37 00:21:38.848 lat (msec): min=13, max=200, avg=88.57, stdev=41.21 00:21:38.848 clat percentiles (msec): 00:21:38.848 | 1.00th=[ 20], 5.00th=[ 25], 10.00th=[ 30], 20.00th=[ 37], 00:21:38.848 | 30.00th=[ 58], 40.00th=[ 71], 50.00th=[ 105], 60.00th=[ 114], 00:21:38.848 | 70.00th=[ 118], 80.00th=[ 124], 90.00th=[ 130], 95.00th=[ 136], 00:21:38.848 | 99.00th=[ 153], 99.50th=[ 171], 99.90th=[ 201], 99.95th=[ 201], 00:21:38.848 | 99.99th=[ 201] 00:21:38.848 bw ( KiB/s): min=110080, max=487936, per=10.48%, avg=184728.00, stdev=109557.35, samples=20 00:21:38.848 iops : min= 430, max= 1906, avg=721.55, stdev=427.98, samples=20 00:21:38.848 lat (msec) : 20=1.62%, 50=23.24%, 100=21.21%, 250=53.93% 00:21:38.848 cpu : usr=0.30%, sys=2.51%, ctx=1709, majf=0, minf=4097 00:21:38.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:21:38.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:38.848 issued rwts: total=7280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:38.848 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:38.848 job10: (groupid=0, jobs=1): err= 0: pid=103052: Wed Jul 10 14:39:48 2024 00:21:38.848 read: IOPS=524, BW=131MiB/s (137MB/s)(1324MiB/10100msec) 00:21:38.848 slat (usec): min=15, max=64420, avg=1862.22, stdev=6374.00 00:21:38.848 clat (msec): min=17, max=205, avg=120.02, stdev=14.02 00:21:38.848 lat (msec): min=18, max=205, avg=121.88, stdev=15.28 00:21:38.848 
clat percentiles (msec): 00:21:38.848 | 1.00th=[ 72], 5.00th=[ 103], 10.00th=[ 107], 20.00th=[ 111], 00:21:38.848 | 30.00th=[ 115], 40.00th=[ 117], 50.00th=[ 121], 60.00th=[ 124], 00:21:38.848 | 70.00th=[ 127], 80.00th=[ 129], 90.00th=[ 134], 95.00th=[ 138], 00:21:38.848 | 99.00th=[ 155], 99.50th=[ 182], 99.90th=[ 205], 99.95th=[ 205], 00:21:38.848 | 99.99th=[ 205] 00:21:38.848 bw ( KiB/s): min=123392, max=146944, per=7.60%, avg=133912.50, stdev=7099.43, samples=20 00:21:38.848 iops : min= 482, max= 574, avg=522.95, stdev=27.77, samples=20 00:21:38.848 lat (msec) : 20=0.11%, 50=0.15%, 100=2.85%, 250=96.88% 00:21:38.848 cpu : usr=0.19%, sys=2.04%, ctx=1248, majf=0, minf=4097 00:21:38.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:38.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:38.848 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:38.848 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:38.848 00:21:38.848 Run status group 0 (all jobs): 00:21:38.848 READ: bw=1721MiB/s (1804MB/s), 131MiB/s-183MiB/s (137MB/s-192MB/s), io=17.0GiB (18.2GB), run=10064-10102msec 00:21:38.848 00:21:38.848 Disk stats (read/write): 00:21:38.848 nvme0n1: ios=13555/0, merge=0/0, ticks=1242914/0, in_queue=1242914, util=97.91% 00:21:38.848 nvme10n1: ios=13752/0, merge=0/0, ticks=1241728/0, in_queue=1241728, util=97.79% 00:21:38.848 nvme1n1: ios=14692/0, merge=0/0, ticks=1244523/0, in_queue=1244523, util=98.15% 00:21:38.848 nvme2n1: ios=14022/0, merge=0/0, ticks=1238096/0, in_queue=1238096, util=98.08% 00:21:38.848 nvme3n1: ios=12056/0, merge=0/0, ticks=1241530/0, in_queue=1241530, util=98.06% 00:21:38.848 nvme4n1: ios=11124/0, merge=0/0, ticks=1239201/0, in_queue=1239201, util=98.54% 00:21:38.848 nvme5n1: ios=11024/0, merge=0/0, ticks=1240534/0, in_queue=1240534, util=98.49% 00:21:38.849 nvme6n1: ios=11875/0, merge=0/0, ticks=1237942/0, in_queue=1237942, util=98.73% 00:21:38.849 nvme7n1: ios=10788/0, merge=0/0, ticks=1243403/0, in_queue=1243403, util=98.92% 00:21:38.849 nvme8n1: ios=14469/0, merge=0/0, ticks=1239908/0, in_queue=1239908, util=99.13% 00:21:38.849 nvme9n1: ios=10476/0, merge=0/0, ticks=1244021/0, in_queue=1244021, util=99.20% 00:21:38.849 14:39:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:21:38.849 [global] 00:21:38.849 thread=1 00:21:38.849 invalidate=1 00:21:38.849 rw=randwrite 00:21:38.849 time_based=1 00:21:38.849 runtime=10 00:21:38.849 ioengine=libaio 00:21:38.849 direct=1 00:21:38.849 bs=262144 00:21:38.849 iodepth=64 00:21:38.849 norandommap=1 00:21:38.849 numjobs=1 00:21:38.849 00:21:38.849 [job0] 00:21:38.849 filename=/dev/nvme0n1 00:21:38.849 [job1] 00:21:38.849 filename=/dev/nvme10n1 00:21:38.849 [job2] 00:21:38.849 filename=/dev/nvme1n1 00:21:38.849 [job3] 00:21:38.849 filename=/dev/nvme2n1 00:21:38.849 [job4] 00:21:38.849 filename=/dev/nvme3n1 00:21:38.849 [job5] 00:21:38.849 filename=/dev/nvme4n1 00:21:38.849 [job6] 00:21:38.849 filename=/dev/nvme5n1 00:21:38.849 [job7] 00:21:38.849 filename=/dev/nvme6n1 00:21:38.849 [job8] 00:21:38.849 filename=/dev/nvme7n1 00:21:38.849 [job9] 00:21:38.849 filename=/dev/nvme8n1 00:21:38.849 [job10] 00:21:38.849 filename=/dev/nvme9n1 00:21:38.849 Could not set queue depth (nvme0n1) 00:21:38.849 Could not set queue depth (nvme10n1) 00:21:38.849 Could 
not set queue depth (nvme1n1) 00:21:38.849 Could not set queue depth (nvme2n1) 00:21:38.849 Could not set queue depth (nvme3n1) 00:21:38.849 Could not set queue depth (nvme4n1) 00:21:38.849 Could not set queue depth (nvme5n1) 00:21:38.849 Could not set queue depth (nvme6n1) 00:21:38.849 Could not set queue depth (nvme7n1) 00:21:38.849 Could not set queue depth (nvme8n1) 00:21:38.849 Could not set queue depth (nvme9n1) 00:21:38.849 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:38.849 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:38.849 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:38.849 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:38.849 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:38.849 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:38.849 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:38.849 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:38.849 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:38.849 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:38.849 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:38.849 fio-3.35 00:21:38.849 Starting 11 threads 00:21:48.870 00:21:48.870 job0: (groupid=0, jobs=1): err= 0: pid=103255: Wed Jul 10 14:39:59 2024 00:21:48.870 write: IOPS=558, BW=140MiB/s (146MB/s)(1410MiB/10098msec); 0 zone resets 00:21:48.870 slat (usec): min=17, max=12138, avg=1768.40, stdev=3011.93 00:21:48.870 clat (msec): min=13, max=208, avg=112.81, stdev=10.37 00:21:48.870 lat (msec): min=13, max=208, avg=114.58, stdev=10.08 00:21:48.870 clat percentiles (msec): 00:21:48.870 | 1.00th=[ 103], 5.00th=[ 105], 10.00th=[ 106], 20.00th=[ 109], 00:21:48.870 | 30.00th=[ 111], 40.00th=[ 112], 50.00th=[ 112], 60.00th=[ 113], 00:21:48.870 | 70.00th=[ 114], 80.00th=[ 116], 90.00th=[ 122], 95.00th=[ 123], 00:21:48.870 | 99.00th=[ 148], 99.50th=[ 161], 99.90th=[ 201], 99.95th=[ 201], 00:21:48.870 | 99.99th=[ 209] 00:21:48.870 bw ( KiB/s): min=118784, max=149504, per=8.45%, avg=142720.00, stdev=6990.40, samples=20 00:21:48.870 iops : min= 464, max= 584, avg=557.50, stdev=27.31, samples=20 00:21:48.870 lat (msec) : 20=0.07%, 50=0.28%, 100=0.44%, 250=99.20% 00:21:48.870 cpu : usr=0.90%, sys=1.50%, ctx=8753, majf=0, minf=1 00:21:48.870 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:21:48.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:48.870 issued rwts: total=0,5638,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:48.870 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:48.870 job1: (groupid=0, jobs=1): err= 0: pid=103256: Wed Jul 10 14:39:59 2024 00:21:48.870 write: IOPS=501, BW=125MiB/s (131MB/s)(1266MiB/10098msec); 0 zone resets 
00:21:48.870 slat (usec): min=16, max=19237, avg=1916.91, stdev=3448.97 00:21:48.870 clat (msec): min=2, max=202, avg=125.71, stdev=27.85 00:21:48.870 lat (msec): min=2, max=202, avg=127.62, stdev=28.17 00:21:48.870 clat percentiles (msec): 00:21:48.870 | 1.00th=[ 43], 5.00th=[ 102], 10.00th=[ 104], 20.00th=[ 109], 00:21:48.870 | 30.00th=[ 110], 40.00th=[ 110], 50.00th=[ 112], 60.00th=[ 136], 00:21:48.870 | 70.00th=[ 144], 80.00th=[ 148], 90.00th=[ 161], 95.00th=[ 178], 00:21:48.870 | 99.00th=[ 188], 99.50th=[ 188], 99.90th=[ 197], 99.95th=[ 197], 00:21:48.870 | 99.99th=[ 203] 00:21:48.870 bw ( KiB/s): min=90292, max=151552, per=7.58%, avg=128059.25, stdev=21612.07, samples=20 00:21:48.870 iops : min= 352, max= 592, avg=499.95, stdev=84.61, samples=20 00:21:48.870 lat (msec) : 4=0.06%, 10=0.08%, 20=0.08%, 50=1.11%, 100=2.84% 00:21:48.870 lat (msec) : 250=95.83% 00:21:48.870 cpu : usr=0.82%, sys=1.56%, ctx=6352, majf=0, minf=1 00:21:48.870 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:48.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:48.870 issued rwts: total=0,5062,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:48.870 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:48.870 job2: (groupid=0, jobs=1): err= 0: pid=103268: Wed Jul 10 14:39:59 2024 00:21:48.870 write: IOPS=659, BW=165MiB/s (173MB/s)(1663MiB/10088msec); 0 zone resets 00:21:48.870 slat (usec): min=17, max=18873, avg=1484.99, stdev=2610.13 00:21:48.870 clat (msec): min=6, max=194, avg=95.54, stdev=20.17 00:21:48.870 lat (msec): min=6, max=194, avg=97.03, stdev=20.36 00:21:48.870 clat percentiles (msec): 00:21:48.870 | 1.00th=[ 30], 5.00th=[ 70], 10.00th=[ 71], 20.00th=[ 74], 00:21:48.870 | 30.00th=[ 75], 40.00th=[ 103], 50.00th=[ 106], 60.00th=[ 109], 00:21:48.870 | 70.00th=[ 111], 80.00th=[ 111], 90.00th=[ 113], 95.00th=[ 113], 00:21:48.870 | 99.00th=[ 117], 99.50th=[ 140], 99.90th=[ 182], 99.95th=[ 188], 00:21:48.870 | 99.99th=[ 194] 00:21:48.870 bw ( KiB/s): min=145408, max=227840, per=9.98%, avg=168667.35, stdev=32830.57, samples=20 00:21:48.870 iops : min= 568, max= 890, avg=658.85, stdev=128.25, samples=20 00:21:48.870 lat (msec) : 10=0.09%, 20=0.44%, 50=1.26%, 100=34.88%, 250=63.33% 00:21:48.870 cpu : usr=0.85%, sys=1.84%, ctx=7908, majf=0, minf=1 00:21:48.870 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:21:48.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:48.870 issued rwts: total=0,6651,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:48.870 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:48.870 job3: (groupid=0, jobs=1): err= 0: pid=103269: Wed Jul 10 14:39:59 2024 00:21:48.870 write: IOPS=493, BW=123MiB/s (129MB/s)(1245MiB/10091msec); 0 zone resets 00:21:48.870 slat (usec): min=16, max=27496, avg=1972.35, stdev=3528.20 00:21:48.870 clat (msec): min=12, max=197, avg=127.72, stdev=26.34 00:21:48.870 lat (msec): min=12, max=197, avg=129.69, stdev=26.60 00:21:48.870 clat percentiles (msec): 00:21:48.870 | 1.00th=[ 52], 5.00th=[ 104], 10.00th=[ 106], 20.00th=[ 110], 00:21:48.870 | 30.00th=[ 111], 40.00th=[ 113], 50.00th=[ 116], 60.00th=[ 138], 00:21:48.870 | 70.00th=[ 146], 80.00th=[ 150], 90.00th=[ 159], 95.00th=[ 178], 00:21:48.870 | 99.00th=[ 188], 99.50th=[ 190], 99.90th=[ 192], 99.95th=[ 192], 
00:21:48.870 | 99.99th=[ 199] 00:21:48.870 bw ( KiB/s): min=88064, max=147968, per=7.45%, avg=125824.00, stdev=20687.70, samples=20 00:21:48.870 iops : min= 344, max= 578, avg=491.50, stdev=80.81, samples=20 00:21:48.870 lat (msec) : 20=0.14%, 50=0.84%, 100=2.19%, 250=96.83% 00:21:48.870 cpu : usr=0.78%, sys=1.34%, ctx=6404, majf=0, minf=1 00:21:48.870 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:21:48.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:48.870 issued rwts: total=0,4978,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:48.870 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:48.870 job4: (groupid=0, jobs=1): err= 0: pid=103270: Wed Jul 10 14:39:59 2024 00:21:48.870 write: IOPS=557, BW=139MiB/s (146MB/s)(1408MiB/10095msec); 0 zone resets 00:21:48.870 slat (usec): min=17, max=38458, avg=1771.35, stdev=3047.83 00:21:48.870 clat (msec): min=40, max=200, avg=112.95, stdev= 9.21 00:21:48.870 lat (msec): min=40, max=200, avg=114.72, stdev= 8.83 00:21:48.870 clat percentiles (msec): 00:21:48.870 | 1.00th=[ 104], 5.00th=[ 105], 10.00th=[ 106], 20.00th=[ 110], 00:21:48.870 | 30.00th=[ 111], 40.00th=[ 112], 50.00th=[ 112], 60.00th=[ 113], 00:21:48.870 | 70.00th=[ 114], 80.00th=[ 116], 90.00th=[ 122], 95.00th=[ 123], 00:21:48.870 | 99.00th=[ 148], 99.50th=[ 165], 99.90th=[ 194], 99.95th=[ 194], 00:21:48.870 | 99.99th=[ 201] 00:21:48.870 bw ( KiB/s): min=112640, max=147456, per=8.43%, avg=142515.20, stdev=8011.96, samples=20 00:21:48.870 iops : min= 440, max= 576, avg=556.70, stdev=31.30, samples=20 00:21:48.870 lat (msec) : 50=0.14%, 100=0.44%, 250=99.41% 00:21:48.870 cpu : usr=0.94%, sys=1.57%, ctx=7136, majf=0, minf=1 00:21:48.870 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:21:48.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:48.870 issued rwts: total=0,5630,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:48.870 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:48.870 job5: (groupid=0, jobs=1): err= 0: pid=103271: Wed Jul 10 14:39:59 2024 00:21:48.870 write: IOPS=419, BW=105MiB/s (110MB/s)(1070MiB/10199msec); 0 zone resets 00:21:48.870 slat (usec): min=18, max=26364, avg=2333.09, stdev=4071.08 00:21:48.870 clat (msec): min=5, max=341, avg=150.16, stdev=24.48 00:21:48.870 lat (msec): min=5, max=341, avg=152.49, stdev=24.42 00:21:48.870 clat percentiles (msec): 00:21:48.870 | 1.00th=[ 61], 5.00th=[ 131], 10.00th=[ 138], 20.00th=[ 142], 00:21:48.870 | 30.00th=[ 146], 40.00th=[ 146], 50.00th=[ 148], 60.00th=[ 150], 00:21:48.870 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 176], 95.00th=[ 186], 00:21:48.870 | 99.00th=[ 239], 99.50th=[ 296], 99.90th=[ 334], 99.95th=[ 334], 00:21:48.870 | 99.99th=[ 342] 00:21:48.870 bw ( KiB/s): min=88064, max=130810, per=6.39%, avg=107890.90, stdev=9108.85, samples=20 00:21:48.870 iops : min= 344, max= 510, avg=421.40, stdev=35.45, samples=20 00:21:48.870 lat (msec) : 10=0.09%, 20=0.28%, 50=0.47%, 100=0.70%, 250=97.57% 00:21:48.871 lat (msec) : 500=0.89% 00:21:48.871 cpu : usr=0.79%, sys=1.26%, ctx=4089, majf=0, minf=1 00:21:48.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:21:48.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.1%, >=64=0.0% 00:21:48.871 issued rwts: total=0,4278,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:48.871 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:48.871 job6: (groupid=0, jobs=1): err= 0: pid=103272: Wed Jul 10 14:39:59 2024 00:21:48.871 write: IOPS=916, BW=229MiB/s (240MB/s)(2313MiB/10097msec); 0 zone resets 00:21:48.871 slat (usec): min=15, max=9149, avg=1076.67, stdev=2023.57 00:21:48.871 clat (msec): min=14, max=205, avg=68.74, stdev=30.02 00:21:48.871 lat (msec): min=14, max=205, avg=69.82, stdev=30.43 00:21:48.871 clat percentiles (msec): 00:21:48.871 | 1.00th=[ 37], 5.00th=[ 38], 10.00th=[ 39], 20.00th=[ 40], 00:21:48.871 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 71], 60.00th=[ 75], 00:21:48.871 | 70.00th=[ 79], 80.00th=[ 109], 90.00th=[ 111], 95.00th=[ 112], 00:21:48.871 | 99.00th=[ 116], 99.50th=[ 123], 99.90th=[ 192], 99.95th=[ 199], 00:21:48.871 | 99.99th=[ 205] 00:21:48.871 bw ( KiB/s): min=147751, max=413696, per=13.94%, avg=235468.20, stdev=105896.56, samples=20 00:21:48.871 iops : min= 577, max= 1616, avg=919.60, stdev=413.60, samples=20 00:21:48.871 lat (msec) : 20=0.09%, 50=44.44%, 100=26.08%, 250=29.39% 00:21:48.871 cpu : usr=1.31%, sys=2.06%, ctx=12330, majf=0, minf=1 00:21:48.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:21:48.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:48.871 issued rwts: total=0,9252,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:48.871 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:48.871 job7: (groupid=0, jobs=1): err= 0: pid=103273: Wed Jul 10 14:39:59 2024 00:21:48.871 write: IOPS=1132, BW=283MiB/s (297MB/s)(2844MiB/10044msec); 0 zone resets 00:21:48.871 slat (usec): min=15, max=47997, avg=874.46, stdev=1699.84 00:21:48.871 clat (msec): min=36, max=139, avg=55.62, stdev=24.24 00:21:48.871 lat (msec): min=36, max=139, avg=56.49, stdev=24.58 00:21:48.871 clat percentiles (msec): 00:21:48.871 | 1.00th=[ 39], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 40], 00:21:48.871 | 30.00th=[ 41], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 47], 00:21:48.871 | 70.00th=[ 53], 80.00th=[ 77], 90.00th=[ 107], 95.00th=[ 112], 00:21:48.871 | 99.00th=[ 114], 99.50th=[ 114], 99.90th=[ 121], 99.95th=[ 132], 00:21:48.871 | 99.99th=[ 140] 00:21:48.871 bw ( KiB/s): min=133120, max=397824, per=17.14%, avg=289612.80, stdev=106393.90, samples=20 00:21:48.871 iops : min= 520, max= 1554, avg=1131.30, stdev=415.60, samples=20 00:21:48.871 lat (msec) : 50=66.66%, 100=20.12%, 250=13.22% 00:21:48.871 cpu : usr=1.76%, sys=2.57%, ctx=13818, majf=0, minf=1 00:21:48.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:21:48.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:48.871 issued rwts: total=0,11376,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:48.871 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:48.871 job8: (groupid=0, jobs=1): err= 0: pid=103274: Wed Jul 10 14:39:59 2024 00:21:48.871 write: IOPS=424, BW=106MiB/s (111MB/s)(1081MiB/10196msec); 0 zone resets 00:21:48.871 slat (usec): min=17, max=23305, avg=2289.18, stdev=4025.64 00:21:48.871 clat (msec): min=2, max=338, avg=148.55, stdev=25.76 00:21:48.871 lat (msec): min=2, max=338, avg=150.84, stdev=25.81 00:21:48.871 clat percentiles (msec): 00:21:48.871 | 1.00th=[ 44], 5.00th=[ 
122], 10.00th=[ 136], 20.00th=[ 140], 00:21:48.871 | 30.00th=[ 144], 40.00th=[ 146], 50.00th=[ 148], 60.00th=[ 148], 00:21:48.871 | 70.00th=[ 150], 80.00th=[ 155], 90.00th=[ 176], 95.00th=[ 186], 00:21:48.871 | 99.00th=[ 234], 99.50th=[ 292], 99.90th=[ 330], 99.95th=[ 330], 00:21:48.871 | 99.99th=[ 338] 00:21:48.871 bw ( KiB/s): min=88240, max=143872, per=6.46%, avg=109151.00, stdev=11248.58, samples=20 00:21:48.871 iops : min= 344, max= 562, avg=426.00, stdev=44.03, samples=20 00:21:48.871 lat (msec) : 4=0.05%, 20=0.21%, 50=0.90%, 100=1.43%, 250=96.53% 00:21:48.871 lat (msec) : 500=0.88% 00:21:48.871 cpu : usr=0.64%, sys=1.12%, ctx=4673, majf=0, minf=1 00:21:48.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:21:48.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:48.871 issued rwts: total=0,4324,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:48.871 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:48.871 job9: (groupid=0, jobs=1): err= 0: pid=103275: Wed Jul 10 14:39:59 2024 00:21:48.871 write: IOPS=497, BW=124MiB/s (130MB/s)(1268MiB/10197msec); 0 zone resets 00:21:48.871 slat (usec): min=16, max=29431, avg=1955.44, stdev=3586.47 00:21:48.871 clat (msec): min=2, max=340, avg=126.65, stdev=38.62 00:21:48.871 lat (msec): min=2, max=340, avg=128.61, stdev=38.97 00:21:48.871 clat percentiles (msec): 00:21:48.871 | 1.00th=[ 71], 5.00th=[ 73], 10.00th=[ 77], 20.00th=[ 81], 00:21:48.871 | 30.00th=[ 111], 40.00th=[ 112], 50.00th=[ 114], 60.00th=[ 146], 00:21:48.871 | 70.00th=[ 150], 80.00th=[ 155], 90.00th=[ 167], 95.00th=[ 190], 00:21:48.871 | 99.00th=[ 220], 99.50th=[ 284], 99.90th=[ 330], 99.95th=[ 330], 00:21:48.871 | 99.99th=[ 342] 00:21:48.871 bw ( KiB/s): min=85504, max=215552, per=7.59%, avg=128230.40, stdev=38487.20, samples=20 00:21:48.871 iops : min= 334, max= 842, avg=500.90, stdev=150.34, samples=20 00:21:48.871 lat (msec) : 4=0.02%, 10=0.08%, 50=0.47%, 100=21.14%, 250=77.54% 00:21:48.871 lat (msec) : 500=0.75% 00:21:48.871 cpu : usr=1.02%, sys=1.07%, ctx=6030, majf=0, minf=1 00:21:48.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:48.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:48.871 issued rwts: total=0,5072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:48.871 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:48.871 job10: (groupid=0, jobs=1): err= 0: pid=103276: Wed Jul 10 14:39:59 2024 00:21:48.871 write: IOPS=497, BW=124MiB/s (130MB/s)(1269MiB/10202msec); 0 zone resets 00:21:48.871 slat (usec): min=17, max=35004, avg=1933.04, stdev=3470.84 00:21:48.871 clat (msec): min=11, max=341, avg=126.66, stdev=29.92 00:21:48.871 lat (msec): min=12, max=341, avg=128.59, stdev=30.14 00:21:48.871 clat percentiles (msec): 00:21:48.871 | 1.00th=[ 31], 5.00th=[ 105], 10.00th=[ 106], 20.00th=[ 111], 00:21:48.871 | 30.00th=[ 112], 40.00th=[ 113], 50.00th=[ 121], 60.00th=[ 136], 00:21:48.871 | 70.00th=[ 146], 80.00th=[ 150], 90.00th=[ 155], 95.00th=[ 159], 00:21:48.871 | 99.00th=[ 222], 99.50th=[ 288], 99.90th=[ 334], 99.95th=[ 334], 00:21:48.871 | 99.99th=[ 342] 00:21:48.871 bw ( KiB/s): min=104448, max=189952, per=7.59%, avg=128281.60, stdev=23054.35, samples=20 00:21:48.871 iops : min= 408, max= 742, avg=501.10, stdev=90.06, samples=20 00:21:48.871 lat (msec) : 20=0.33%, 
50=2.03%, 100=1.71%, 250=95.17%, 500=0.75% 00:21:48.871 cpu : usr=0.91%, sys=1.35%, ctx=5948, majf=0, minf=1 00:21:48.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:48.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:48.871 issued rwts: total=0,5075,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:48.871 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:48.871 00:21:48.871 Run status group 0 (all jobs): 00:21:48.871 WRITE: bw=1650MiB/s (1730MB/s), 105MiB/s-283MiB/s (110MB/s-297MB/s), io=16.4GiB (17.7GB), run=10044-10202msec 00:21:48.871 00:21:48.871 Disk stats (read/write): 00:21:48.871 nvme0n1: ios=49/11143, merge=0/0, ticks=31/1215456, in_queue=1215487, util=97.79% 00:21:48.871 nvme10n1: ios=49/9998, merge=0/0, ticks=35/1217443, in_queue=1217478, util=98.06% 00:21:48.871 nvme1n1: ios=0/13162, merge=0/0, ticks=0/1214660, in_queue=1214660, util=98.01% 00:21:48.871 nvme2n1: ios=0/9799, merge=0/0, ticks=0/1213716, in_queue=1213716, util=98.00% 00:21:48.871 nvme3n1: ios=0/11107, merge=0/0, ticks=0/1213959, in_queue=1213959, util=98.10% 00:21:48.871 nvme4n1: ios=0/8423, merge=0/0, ticks=0/1213274, in_queue=1213274, util=98.43% 00:21:48.871 nvme5n1: ios=0/18380, merge=0/0, ticks=0/1216072, in_queue=1216072, util=98.58% 00:21:48.871 nvme6n1: ios=0/22571, merge=0/0, ticks=0/1217593, in_queue=1217593, util=98.47% 00:21:48.871 nvme7n1: ios=0/8522, merge=0/0, ticks=0/1214777, in_queue=1214777, util=98.89% 00:21:48.871 nvme8n1: ios=0/10008, merge=0/0, ticks=0/1213099, in_queue=1213099, util=98.88% 00:21:48.871 nvme9n1: ios=0/10010, merge=0/0, ticks=0/1213845, in_queue=1213845, util=98.94% 00:21:48.871 14:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:21:48.871 14:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:21:48.871 14:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.871 14:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:48.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:48.871 14:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:21:48.871 14:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:48.871 14:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:48.871 14:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:21:48.871 14:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:48.871 14:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:21:48.871 14:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:48.871 14:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:48.871 14:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.871 14:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:48.871 14:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.871 14:39:59 nvmf_tcp.nvmf_multiconnection 
-- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.871 14:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:21:48.871 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:21:48.871 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:21:48.871 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:48.871 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:48.871 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:21:48.871 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:21:48.871 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:48.871 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:48.871 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:48.871 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:21:48.872 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:21:48.872 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1219 -- # local i=0 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:21:48.872 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:21:48.872 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk 
-l -o NAME,SERIAL 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:21:48.872 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:21:48.872 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
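The teardown traced above follows one pattern per subsystem: disconnect the initiator, wait for the SPDK$i serial to disappear from lsblk, then delete the subsystem over JSON-RPC. It has already run for cnode1 through cnode8 at this point and continues below for cnode9 to cnode11. Condensed, it amounts to the sketch below (reconstructed from the trace, not the verbatim multiconnection.sh source; waitforserial_disconnect and rpc_cmd are the helper functions from the common scripts seen in the trace).

# Per-subsystem teardown loop, as driven by multiconnection.sh in the trace above.
# NVMF_SUBSYS=11 is inferred from the `seq 1 11` call earlier in the log.
for i in $(seq 1 "$NVMF_SUBSYS"); do
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"                 # drop the initiator-side connection
    waitforserial_disconnect "SPDK${i}"                                # poll lsblk until serial SPDK$i is gone
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"      # remove the subsystem on the target
done

After the last iteration the script only removes the local fio state file and calls nvmftestfini, which produces the rmmod/killprocess output further down.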
00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:21:48.872 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:21:48.872 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:21:48.872 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:21:48.872 14:40:00 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:48.872 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:21:48.873 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:48.873 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:21:48.873 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.873 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:48.873 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.873 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:21:48.873 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:21:48.873 14:40:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:21:48.873 14:40:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:48.873 14:40:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:21:48.873 14:40:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:48.873 14:40:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:21:48.873 14:40:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:48.873 14:40:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:48.873 rmmod nvme_tcp 00:21:48.873 rmmod nvme_fabrics 00:21:48.873 rmmod nvme_keyring 00:21:48.873 14:40:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:48.873 14:40:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:21:48.873 14:40:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:21:48.873 14:40:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 102583 ']' 00:21:48.873 14:40:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 102583 00:21:48.873 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 102583 ']' 00:21:48.873 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 102583 00:21:48.873 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:21:48.873 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:48.873 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 102583 00:21:48.873 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:48.873 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:48.873 killing process with pid 102583 00:21:48.873 14:40:00 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 102583' 00:21:48.873 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 102583 00:21:48.873 14:40:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 102583 00:21:49.131 14:40:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:49.131 14:40:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:49.131 14:40:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:49.131 14:40:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:49.131 14:40:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:49.131 14:40:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.131 14:40:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:49.131 14:40:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.131 14:40:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:49.131 00:21:49.131 real 0m48.445s 00:21:49.131 user 2m42.636s 00:21:49.131 sys 0m23.893s 00:21:49.131 14:40:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:49.131 14:40:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:49.131 ************************************ 00:21:49.131 END TEST nvmf_multiconnection 00:21:49.131 ************************************ 00:21:49.131 14:40:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:49.131 14:40:01 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:21:49.131 14:40:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:49.131 14:40:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:49.131 14:40:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:49.131 ************************************ 00:21:49.131 START TEST nvmf_initiator_timeout 00:21:49.131 ************************************ 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:21:49.131 * Looking for test storage... 
00:21:49.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:49.131 14:40:01 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:49.131 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:49.389 Cannot find device "nvmf_tgt_br" 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # true 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:49.389 Cannot find device "nvmf_tgt_br2" 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # true 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:49.389 Cannot find device "nvmf_tgt_br" 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # true 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:49.389 Cannot find device "nvmf_tgt_br2" 00:21:49.389 14:40:01 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # true 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:49.389 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:49.389 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:49.390 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:21:49.390 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:49.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:49.390 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:21:49.390 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:49.390 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:49.390 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:49.390 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:49.390 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:49.390 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:49.390 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:49.390 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:49.390 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:49.390 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:49.390 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:49.390 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:49.390 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:49.390 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:49.390 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:49.390 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:49.390 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:49.390 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:49.390 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:49.649 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:49.649 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
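The block of ip commands above is nvmf_veth_init from nvmf/common.sh building the test topology. Stripped of the xtrace prefixes, and omitting the preliminary cleanup that produced the "Cannot find device" messages, it amounts to the following reconstruction.

# Veth/namespace topology built by nvmf_veth_init (reconstructed from the trace above).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target port 1 pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # target port 2 pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # move the target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                # bridge the three host-side ends together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

The iptables rules and the three pings that follow simply verify that 10.0.0.1 (initiator) and 10.0.0.2/10.0.0.3 (the target ports inside nvmf_tgt_ns_spdk) can reach each other before the target is started.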
00:21:49.649 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:49.649 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:49.649 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:49.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:49.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:21:49.649 00:21:49.649 --- 10.0.0.2 ping statistics --- 00:21:49.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.649 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:21:49.649 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:49.649 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:49.649 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:21:49.649 00:21:49.649 --- 10.0.0.3 ping statistics --- 00:21:49.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.649 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:21:49.649 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:49.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:49.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:21:49.649 00:21:49.649 --- 10.0.0.1 ping statistics --- 00:21:49.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.649 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:21:49.649 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.649 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@433 -- # return 0 00:21:49.649 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:49.649 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.649 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:49.649 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:49.649 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.649 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:49.649 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:49.649 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:21:49.649 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:49.649 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:49.649 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:49.649 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=103640 00:21:49.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:49.649 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 103640 00:21:49.649 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 103640 ']' 00:21:49.649 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.649 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:49.649 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:49.649 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.649 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:49.649 14:40:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:49.649 [2024-07-10 14:40:01.832057] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:21:49.649 [2024-07-10 14:40:01.832161] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.908 [2024-07-10 14:40:01.951424] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:49.908 [2024-07-10 14:40:01.966572] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:49.908 [2024-07-10 14:40:02.009686] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.908 [2024-07-10 14:40:02.009764] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.908 [2024-07-10 14:40:02.009784] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.908 [2024-07-10 14:40:02.009798] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.908 [2024-07-10 14:40:02.009810] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
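With the network up, nvmfappstart launches the target inside that namespace and waits for its RPC socket; from the trace it reduces to roughly the sketch below. The path and mask values are the ones used in this run; the flag explanations in the comments are inferred from the startup notices above rather than stated explicitly in the log.

# Start nvmf_tgt inside the test namespace and wait for it to come up.
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
# -i 0: shared-memory id, -e 0xFFFF: tracepoint group mask ("Tracepoint Group Mask 0xFFFF specified"),
# -m 0xF: core mask, matching the four "Reactor started on core" notices that follow.
"${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# waitforlisten (autotest_common.sh) blocks until the app answers on /var/tmp/spdk.sock,
# which is what the "Waiting for process to start up..." message above refers to.
waitforlisten "$nvmfpid"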
00:21:49.908 [2024-07-10 14:40:02.009985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.908 [2024-07-10 14:40:02.010127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.908 [2024-07-10 14:40:02.010874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:49.908 [2024-07-10 14:40:02.010890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.908 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:49.908 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:21:49.908 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:49.908 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:49.908 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:49.908 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.908 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:49.908 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:49.908 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.908 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:49.908 Malloc0 00:21:49.908 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.908 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:21:49.908 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.908 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:49.908 Delay0 00:21:49.908 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.908 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:49.908 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.908 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:49.908 [2024-07-10 14:40:02.191798] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.166 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.166 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:50.166 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.166 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:50.166 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.166 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:50.167 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.167 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:50.167 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.167 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:50.167 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.167 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:50.167 [2024-07-10 14:40:02.219984] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:50.167 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.167 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:50.167 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:21:50.167 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:21:50.167 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:50.167 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:50.167 14:40:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:21:52.700 14:40:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:52.700 14:40:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:52.700 14:40:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:52.700 14:40:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:52.700 14:40:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:52.700 14:40:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:21:52.700 14:40:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=103704 00:21:52.700 14:40:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:21:52.700 14:40:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:21:52.700 [global] 00:21:52.700 thread=1 00:21:52.700 invalidate=1 00:21:52.700 rw=write 00:21:52.700 time_based=1 00:21:52.700 runtime=60 00:21:52.700 ioengine=libaio 00:21:52.700 direct=1 00:21:52.700 bs=4096 00:21:52.700 iodepth=1 00:21:52.700 norandommap=0 00:21:52.700 numjobs=1 00:21:52.700 00:21:52.700 verify_dump=1 00:21:52.700 verify_backlog=512 00:21:52.700 verify_state_save=0 00:21:52.700 do_verify=1 00:21:52.700 verify=crc32c-intel 00:21:52.700 [job0] 00:21:52.700 filename=/dev/nvme0n1 00:21:52.700 Could not set queue depth (nvme0n1) 00:21:52.700 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:52.700 fio-3.35 00:21:52.700 Starting 1 thread 
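Taken together, the RPCs traced above build the device under test, a malloc bdev wrapped in a Delay0 delay bdev and exported over NVMe/TCP, and the bdev_delay_update_latency calls that come next in the log are what inject a multi-second stall under the running fio job and then remove it. A condensed sketch of the whole flow follows (reconstructed from the trace; the real script issues each update RPC separately rather than in loops, and the microsecond reading of the latency arguments is an assumption about the delay bdev RPC, not something the log states).

# Target side: malloc bdev -> delay bdev -> TCP subsystem, as traced above.
rpc_cmd bdev_malloc_create 64 512 -b Malloc0                            # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30  # small baseline delays on Delay0
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: connect, then run the 60 s verified 4 KiB write job shown in the job file above.
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v &
fio_pid=$!

# Crank the delay bdev's latencies up while fio is writing (31000000 ~= 31 s if the
# arguments are microseconds), then drop them back to 30; the job must survive the stall.
sleep 3
for metric in avg_read avg_write p99_read; do
    rpc_cmd bdev_delay_update_latency Delay0 "$metric" 31000000
done
rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000
sleep 3
for metric in avg_read avg_write p99_read p99_write; do
    rpc_cmd bdev_delay_update_latency Delay0 "$metric" 30
done
wait "$fio_pid"    # exit status 0 leads to the "nvmf hotplug test: fio successful as expected" message below

The 40.6-second max completion latency and the huge clat stddev in the fio summary further down are that injected stall showing up in the write path.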
00:21:55.232 14:40:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:21:55.232 14:40:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.232 14:40:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:55.232 true 00:21:55.232 14:40:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.232 14:40:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:21:55.232 14:40:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.232 14:40:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:55.232 true 00:21:55.232 14:40:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.232 14:40:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:21:55.232 14:40:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.232 14:40:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:55.232 true 00:21:55.232 14:40:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.232 14:40:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:21:55.232 14:40:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.232 14:40:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:55.232 true 00:21:55.232 14:40:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.232 14:40:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:21:58.602 14:40:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:21:58.602 14:40:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.602 14:40:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:58.602 true 00:21:58.602 14:40:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.602 14:40:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:21:58.602 14:40:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.602 14:40:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:58.602 true 00:21:58.602 14:40:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.602 14:40:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:21:58.602 14:40:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.602 14:40:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:58.602 true 00:21:58.602 14:40:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.602 14:40:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd 
bdev_delay_update_latency Delay0 p99_write 30 00:21:58.602 14:40:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.602 14:40:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:58.602 true 00:21:58.602 14:40:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.602 14:40:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:21:58.602 14:40:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 103704 00:22:54.828 00:22:54.828 job0: (groupid=0, jobs=1): err= 0: pid=103730: Wed Jul 10 14:41:04 2024 00:22:54.828 read: IOPS=825, BW=3303KiB/s (3382kB/s)(194MiB/60000msec) 00:22:54.828 slat (nsec): min=13504, max=72288, avg=16867.83, stdev=3706.03 00:22:54.828 clat (usec): min=171, max=609, avg=194.96, stdev=15.21 00:22:54.828 lat (usec): min=185, max=634, avg=211.83, stdev=16.35 00:22:54.828 clat percentiles (usec): 00:22:54.828 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 186], 00:22:54.828 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 194], 00:22:54.828 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 221], 00:22:54.828 | 99.00th=[ 258], 99.50th=[ 269], 99.90th=[ 314], 99.95th=[ 338], 00:22:54.828 | 99.99th=[ 429] 00:22:54.828 write: IOPS=827, BW=3311KiB/s (3390kB/s)(194MiB/60000msec); 0 zone resets 00:22:54.828 slat (usec): min=20, max=13671, avg=24.97, stdev=70.65 00:22:54.828 clat (usec): min=55, max=40626k, avg=968.03, stdev=182299.77 00:22:54.828 lat (usec): min=150, max=40626k, avg=993.01, stdev=182299.78 00:22:54.828 clat percentiles (usec): 00:22:54.828 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 139], 20.00th=[ 143], 00:22:54.828 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 149], 00:22:54.828 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 163], 95.00th=[ 174], 00:22:54.828 | 99.00th=[ 202], 99.50th=[ 215], 99.90th=[ 258], 99.95th=[ 289], 00:22:54.828 | 99.99th=[ 668] 00:22:54.828 bw ( KiB/s): min= 1832, max=12288, per=100.00%, avg=9976.90, stdev=1958.58, samples=39 00:22:54.828 iops : min= 458, max= 3072, avg=2494.21, stdev=489.64, samples=39 00:22:54.828 lat (usec) : 100=0.01%, 250=99.23%, 500=0.76%, 750=0.01% 00:22:54.828 lat (msec) : >=2000=0.01% 00:22:54.828 cpu : usr=0.66%, sys=2.57%, ctx=99218, majf=0, minf=2 00:22:54.828 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:54.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.828 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.828 issued rwts: total=49542,49664,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:54.828 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:54.828 00:22:54.828 Run status group 0 (all jobs): 00:22:54.828 READ: bw=3303KiB/s (3382kB/s), 3303KiB/s-3303KiB/s (3382kB/s-3382kB/s), io=194MiB (203MB), run=60000-60000msec 00:22:54.828 WRITE: bw=3311KiB/s (3390kB/s), 3311KiB/s-3311KiB/s (3390kB/s-3390kB/s), io=194MiB (203MB), run=60000-60000msec 00:22:54.828 00:22:54.828 Disk stats (read/write): 00:22:54.828 nvme0n1: ios=49461/49554, merge=0/0, ticks=9967/8020, in_queue=17987, util=99.89% 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:54.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:54.828 nvmf hotplug test: fio successful as expected 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:54.828 rmmod nvme_tcp 00:22:54.828 rmmod nvme_fabrics 00:22:54.828 rmmod nvme_keyring 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 103640 ']' 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 103640 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 103640 ']' 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 103640 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 
103640 00:22:54.828 killing process with pid 103640 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 103640' 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 103640 00:22:54.828 14:41:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 103640 00:22:54.828 14:41:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:54.828 14:41:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:54.828 14:41:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:54.828 14:41:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:54.828 14:41:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:54.828 14:41:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.828 14:41:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:54.828 14:41:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.828 14:41:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:54.828 00:22:54.828 real 1m3.734s 00:22:54.828 user 4m1.886s 00:22:54.828 sys 0m9.898s 00:22:54.828 14:41:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:54.828 14:41:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:54.828 ************************************ 00:22:54.828 END TEST nvmf_initiator_timeout 00:22:54.828 ************************************ 00:22:54.828 14:41:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:54.828 14:41:05 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:22:54.828 14:41:05 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:22:54.828 14:41:05 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:54.828 14:41:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:54.828 14:41:05 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:22:54.828 14:41:05 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:54.828 14:41:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:54.828 14:41:05 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:22:54.828 14:41:05 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:54.828 14:41:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:54.828 14:41:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:54.828 14:41:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:54.828 ************************************ 00:22:54.828 START TEST nvmf_multicontroller 00:22:54.828 ************************************ 00:22:54.828 14:41:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:54.828 * Looking for test storage... 
00:22:54.829 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:54.829 14:41:05 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:54.829 Cannot find device "nvmf_tgt_br" 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:54.829 Cannot find device "nvmf_tgt_br2" 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 
-- # ip link set nvmf_tgt_br down 00:22:54.829 Cannot find device "nvmf_tgt_br" 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:54.829 Cannot find device "nvmf_tgt_br2" 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:54.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:54.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:54.829 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@196 
-- # ip link set nvmf_init_br master nvmf_br 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:54.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:54.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:22:54.830 00:22:54.830 --- 10.0.0.2 ping statistics --- 00:22:54.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.830 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:54.830 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:54.830 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:22:54.830 00:22:54.830 --- 10.0.0.3 ping statistics --- 00:22:54.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.830 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:54.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:54.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:22:54.830 00:22:54.830 --- 10.0.0.1 ping statistics --- 00:22:54.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.830 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=104546 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 104546 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 104546 ']' 00:22:54.830 14:41:05 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:54.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:54.830 14:41:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.830 [2024-07-10 14:41:05.677564] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:22:54.830 [2024-07-10 14:41:05.677662] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.830 [2024-07-10 14:41:05.797457] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:54.830 [2024-07-10 14:41:05.812157] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:54.830 [2024-07-10 14:41:05.857071] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.830 [2024-07-10 14:41:05.857447] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.830 [2024-07-10 14:41:05.857678] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.830 [2024-07-10 14:41:05.857919] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.830 [2024-07-10 14:41:05.858081] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:54.830 [2024-07-10 14:41:05.858449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.830 [2024-07-10 14:41:05.858530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:54.830 [2024-07-10 14:41:05.858540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.830 [2024-07-10 14:41:06.701678] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.830 Malloc0 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.830 [2024-07-10 14:41:06.773077] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.830 
14:41:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.830 [2024-07-10 14:41:06.780912] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.830 Malloc1 00:22:54.830 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.831 14:41:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:54.831 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.831 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.831 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.831 14:41:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:54.831 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.831 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.831 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.831 14:41:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:54.831 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.831 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.831 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.831 14:41:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:54.831 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.831 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.831 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.831 14:41:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=104597 00:22:54.831 14:41:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:54.831 14:41:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:54.831 14:41:06 nvmf_tcp.nvmf_multicontroller -- 
host/multicontroller.sh@47 -- # waitforlisten 104597 /var/tmp/bdevperf.sock 00:22:54.831 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 104597 ']' 00:22:54.831 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:54.831 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:54.831 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:54.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:54.831 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:54.831 14:41:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.090 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:55.090 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:22:55.090 14:41:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:55.090 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.090 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.090 NVMe0n1 00:22:55.090 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.090 14:41:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:55.090 14:41:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:55.090 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.090 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.090 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.090 1 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.091 2024/07/10 14:41:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:22:55.091 request: 00:22:55.091 { 00:22:55.091 "method": "bdev_nvme_attach_controller", 00:22:55.091 "params": { 00:22:55.091 "name": "NVMe0", 00:22:55.091 "trtype": "tcp", 00:22:55.091 "traddr": "10.0.0.2", 00:22:55.091 "adrfam": "ipv4", 00:22:55.091 "trsvcid": "4420", 00:22:55.091 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.091 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:55.091 "hostaddr": "10.0.0.2", 00:22:55.091 "hostsvcid": "60000", 00:22:55.091 "prchk_reftag": false, 00:22:55.091 "prchk_guard": false, 00:22:55.091 "hdgst": false, 00:22:55.091 "ddgst": false 00:22:55.091 } 00:22:55.091 } 00:22:55.091 Got JSON-RPC error response 00:22:55.091 GoRPCClient: error on JSON-RPC call 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set 
+x 00:22:55.091 2024/07/10 14:41:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:22:55.091 request: 00:22:55.091 { 00:22:55.091 "method": "bdev_nvme_attach_controller", 00:22:55.091 "params": { 00:22:55.091 "name": "NVMe0", 00:22:55.091 "trtype": "tcp", 00:22:55.091 "traddr": "10.0.0.2", 00:22:55.091 "adrfam": "ipv4", 00:22:55.091 "trsvcid": "4420", 00:22:55.091 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:55.091 "hostaddr": "10.0.0.2", 00:22:55.091 "hostsvcid": "60000", 00:22:55.091 "prchk_reftag": false, 00:22:55.091 "prchk_guard": false, 00:22:55.091 "hdgst": false, 00:22:55.091 "ddgst": false 00:22:55.091 } 00:22:55.091 } 00:22:55.091 Got JSON-RPC error response 00:22:55.091 GoRPCClient: error on JSON-RPC call 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.091 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.091 2024/07/10 14:41:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], 
err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:22:55.091 request: 00:22:55.091 { 00:22:55.091 "method": "bdev_nvme_attach_controller", 00:22:55.091 "params": { 00:22:55.091 "name": "NVMe0", 00:22:55.091 "trtype": "tcp", 00:22:55.091 "traddr": "10.0.0.2", 00:22:55.091 "adrfam": "ipv4", 00:22:55.091 "trsvcid": "4420", 00:22:55.091 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.091 "hostaddr": "10.0.0.2", 00:22:55.091 "hostsvcid": "60000", 00:22:55.091 "prchk_reftag": false, 00:22:55.091 "prchk_guard": false, 00:22:55.091 "hdgst": false, 00:22:55.091 "ddgst": false, 00:22:55.091 "multipath": "disable" 00:22:55.091 } 00:22:55.092 } 00:22:55.092 Got JSON-RPC error response 00:22:55.092 GoRPCClient: error on JSON-RPC call 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.092 2024/07/10 14:41:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:22:55.092 request: 00:22:55.092 { 00:22:55.092 "method": "bdev_nvme_attach_controller", 00:22:55.092 "params": { 00:22:55.092 "name": "NVMe0", 00:22:55.092 
"trtype": "tcp", 00:22:55.092 "traddr": "10.0.0.2", 00:22:55.092 "adrfam": "ipv4", 00:22:55.092 "trsvcid": "4420", 00:22:55.092 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.092 "hostaddr": "10.0.0.2", 00:22:55.092 "hostsvcid": "60000", 00:22:55.092 "prchk_reftag": false, 00:22:55.092 "prchk_guard": false, 00:22:55.092 "hdgst": false, 00:22:55.092 "ddgst": false, 00:22:55.092 "multipath": "failover" 00:22:55.092 } 00:22:55.092 } 00:22:55.092 Got JSON-RPC error response 00:22:55.092 GoRPCClient: error on JSON-RPC call 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.092 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.092 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.350 00:22:55.350 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.350 14:41:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:55.350 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.350 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.350 14:41:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:55.350 14:41:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.350 14:41:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:55.350 14:41:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:56.727 0 00:22:56.727 14:41:08 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 104597 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 104597 ']' 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 104597 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104597 00:22:56.727 killing process with pid 104597 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104597' 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 104597 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 104597 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:22:56.727 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:22:56.727 [2024-07-10 14:41:06.876475] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 
00:22:56.727 [2024-07-10 14:41:06.876586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104597 ] 00:22:56.727 [2024-07-10 14:41:06.994759] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:56.727 [2024-07-10 14:41:07.009346] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.727 [2024-07-10 14:41:07.045676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.727 [2024-07-10 14:41:07.430548] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 2e07977e-3fe6-42ca-a061-e2cb8ff09dd3 already exists 00:22:56.727 [2024-07-10 14:41:07.430615] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:2e07977e-3fe6-42ca-a061-e2cb8ff09dd3 alias for bdev NVMe1n1 00:22:56.727 [2024-07-10 14:41:07.430633] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:56.727 Running I/O for 1 seconds... 00:22:56.727 00:22:56.727 Latency(us) 00:22:56.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.727 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:56.727 NVMe0n1 : 1.01 18992.18 74.19 0.00 0.00 6719.84 3619.37 16681.89 00:22:56.727 =================================================================================================================== 00:22:56.727 Total : 18992.18 74.19 0.00 0.00 6719.84 3619.37 16681.89 00:22:56.727 Received shutdown signal, test time was about 1.000000 seconds 00:22:56.727 00:22:56.727 Latency(us) 00:22:56.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.727 =================================================================================================================== 00:22:56.727 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:56.727 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:56.727 rmmod nvme_tcp 00:22:56.727 rmmod nvme_fabrics 00:22:56.727 rmmod nvme_keyring 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 104546 ']' 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@490 -- # killprocess 104546 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 104546 ']' 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 104546 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:22:56.727 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:56.728 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104546 00:22:56.728 killing process with pid 104546 00:22:56.728 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:56.728 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:56.728 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104546' 00:22:56.728 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 104546 00:22:56.728 14:41:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 104546 00:22:56.986 14:41:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:56.986 14:41:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:56.986 14:41:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:56.986 14:41:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:56.986 14:41:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:56.986 14:41:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.986 14:41:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:56.986 14:41:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.986 14:41:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:56.986 00:22:56.986 real 0m3.989s 00:22:56.986 user 0m11.964s 00:22:56.986 sys 0m0.865s 00:22:56.986 14:41:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:56.986 14:41:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.986 ************************************ 00:22:56.986 END TEST nvmf_multicontroller 00:22:56.986 ************************************ 00:22:56.986 14:41:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:56.986 14:41:09 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:56.986 14:41:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:56.986 14:41:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:56.986 14:41:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:56.986 ************************************ 00:22:56.986 START TEST nvmf_aer 00:22:56.986 ************************************ 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:56.987 * Looking for test storage... 
00:22:56.987 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:56.987 
14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:56.987 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:57.246 Cannot find device "nvmf_tgt_br" 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # true 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:57.246 Cannot find device "nvmf_tgt_br2" 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # true 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:57.246 Cannot find device "nvmf_tgt_br" 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # true 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:57.246 Cannot find device "nvmf_tgt_br2" 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # true 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:57.246 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # true 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:57.246 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # true 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 
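The nvmf_veth_init steps traced here, together with the address, bridge and iptables wiring that continues just below, build a small virtual topology: a veth pair whose initiator end stays in the root namespace and veth pairs whose target ends are moved into nvmf_tgt_ns_spdk, all joined by the nvmf_br bridge. A condensed sketch of the same wiring for a single target interface, assuming iproute2/iptables and root privileges:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, stays in the root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side, moved into the namespace next
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # root namespace to target namespace, matching the connectivity checks below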
00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:57.246 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:57.504 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:57.504 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:57.504 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:57.504 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:57.504 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:57.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:57.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:22:57.504 00:22:57.504 --- 10.0.0.2 ping statistics --- 00:22:57.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.504 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:22:57.504 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:57.504 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:57.504 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:22:57.504 00:22:57.504 --- 10.0.0.3 ping statistics --- 00:22:57.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.504 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:22:57.504 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:57.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:57.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:22:57.504 00:22:57.504 --- 10.0.0.1 ping statistics --- 00:22:57.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.504 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:22:57.504 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.504 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:22:57.504 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:57.504 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.504 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:57.504 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:57.504 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.504 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:57.504 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:57.504 14:41:09 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:57.504 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:57.504 14:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:57.504 14:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:57.504 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=104828 00:22:57.504 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:57.504 14:41:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 104828 00:22:57.504 14:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 104828 ']' 00:22:57.504 14:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.504 14:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:57.504 14:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.504 14:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:57.504 14:41:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:57.504 [2024-07-10 14:41:09.655360] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:22:57.504 [2024-07-10 14:41:09.655446] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.504 [2024-07-10 14:41:09.775147] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:57.763 [2024-07-10 14:41:09.796135] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:57.763 [2024-07-10 14:41:09.839870] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.763 [2024-07-10 14:41:09.839923] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
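nvmfappstart above launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the application's RPC socket answers. One way to approximate that step outside the harness, a sketch assuming the build tree paths from this log and using rpc_get_methods as a readiness probe (the harness may probe differently):

    SPDK=/home/vagrant/spdk_repo/spdk
    # run as root; the harness is already privileged at this point
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the default RPC socket until the target is ready to serve rpc.py requests
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done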
00:22:57.763 [2024-07-10 14:41:09.839936] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.763 [2024-07-10 14:41:09.839946] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.763 [2024-07-10 14:41:09.839955] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:57.763 [2024-07-10 14:41:09.840112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.763 [2024-07-10 14:41:09.841095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.763 [2024-07-10 14:41:09.841187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:57.763 [2024-07-10 14:41:09.841197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.698 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:58.699 [2024-07-10 14:41:10.684586] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:58.699 Malloc0 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:58.699 [2024-07-10 14:41:10.750403] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:58.699 [ 00:22:58.699 { 00:22:58.699 "allow_any_host": true, 00:22:58.699 "hosts": [], 00:22:58.699 "listen_addresses": [], 00:22:58.699 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:58.699 "subtype": "Discovery" 00:22:58.699 }, 00:22:58.699 { 00:22:58.699 "allow_any_host": true, 00:22:58.699 "hosts": [], 00:22:58.699 "listen_addresses": [ 00:22:58.699 { 00:22:58.699 "adrfam": "IPv4", 00:22:58.699 "traddr": "10.0.0.2", 00:22:58.699 "trsvcid": "4420", 00:22:58.699 "trtype": "TCP" 00:22:58.699 } 00:22:58.699 ], 00:22:58.699 "max_cntlid": 65519, 00:22:58.699 "max_namespaces": 2, 00:22:58.699 "min_cntlid": 1, 00:22:58.699 "model_number": "SPDK bdev Controller", 00:22:58.699 "namespaces": [ 00:22:58.699 { 00:22:58.699 "bdev_name": "Malloc0", 00:22:58.699 "name": "Malloc0", 00:22:58.699 "nguid": "E8B1E19B92EE4DB8821100D6BA557259", 00:22:58.699 "nsid": 1, 00:22:58.699 "uuid": "e8b1e19b-92ee-4db8-8211-00d6ba557259" 00:22:58.699 } 00:22:58.699 ], 00:22:58.699 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.699 "serial_number": "SPDK00000000000001", 00:22:58.699 "subtype": "NVMe" 00:22:58.699 } 00:22:58.699 ] 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=104882 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.699 14:41:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:58.958 Malloc1 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:58.958 Asynchronous Event Request test 00:22:58.958 Attaching to 10.0.0.2 00:22:58.958 Attached to 10.0.0.2 00:22:58.958 Registering asynchronous event callbacks... 00:22:58.958 Starting namespace attribute notice tests for all controllers... 00:22:58.958 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:58.958 aer_cb - Changed Namespace 00:22:58.958 Cleaning up... 00:22:58.958 [ 00:22:58.958 { 00:22:58.958 "allow_any_host": true, 00:22:58.958 "hosts": [], 00:22:58.958 "listen_addresses": [], 00:22:58.958 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:58.958 "subtype": "Discovery" 00:22:58.958 }, 00:22:58.958 { 00:22:58.958 "allow_any_host": true, 00:22:58.958 "hosts": [], 00:22:58.958 "listen_addresses": [ 00:22:58.958 { 00:22:58.958 "adrfam": "IPv4", 00:22:58.958 "traddr": "10.0.0.2", 00:22:58.958 "trsvcid": "4420", 00:22:58.958 "trtype": "TCP" 00:22:58.958 } 00:22:58.958 ], 00:22:58.958 "max_cntlid": 65519, 00:22:58.958 "max_namespaces": 2, 00:22:58.958 "min_cntlid": 1, 00:22:58.958 "model_number": "SPDK bdev Controller", 00:22:58.958 "namespaces": [ 00:22:58.958 { 00:22:58.958 "bdev_name": "Malloc0", 00:22:58.958 "name": "Malloc0", 00:22:58.958 "nguid": "E8B1E19B92EE4DB8821100D6BA557259", 00:22:58.958 "nsid": 1, 00:22:58.958 "uuid": "e8b1e19b-92ee-4db8-8211-00d6ba557259" 00:22:58.958 }, 00:22:58.958 { 00:22:58.958 "bdev_name": "Malloc1", 00:22:58.958 "name": "Malloc1", 00:22:58.958 "nguid": "19876C5F9FD6499BBC35A4D690AB95DA", 00:22:58.958 "nsid": 2, 00:22:58.958 "uuid": "19876c5f-9fd6-499b-bc35-a4d690ab95da" 00:22:58.958 } 00:22:58.958 ], 00:22:58.958 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.958 "serial_number": "SPDK00000000000001", 00:22:58.958 "subtype": "NVMe" 00:22:58.958 } 00:22:58.958 ] 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 104882 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- 
# rpc_cmd bdev_malloc_delete Malloc1 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:58.958 rmmod nvme_tcp 00:22:58.958 rmmod nvme_fabrics 00:22:58.958 rmmod nvme_keyring 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 104828 ']' 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 104828 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 104828 ']' 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 104828 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104828 00:22:58.958 killing process with pid 104828 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104828' 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 104828 00:22:58.958 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 104828 00:22:59.239 14:41:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:59.239 14:41:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:59.239 14:41:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:59.239 14:41:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:59.239 14:41:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:59.239 14:41:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.239 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
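The AER exercise traced above is a two-stage handshake: the host-side aer tool from test/nvme/aer is started against cnode1 with -n 2 -t /tmp/aer_touch_file and, once attached with its asynchronous-event callback registered, it appears to create the touch file, which releases the harness's waitforfile loop; the harness then adds Malloc1 as nsid 2 to trigger the namespace-attribute-changed notice (the "aer_cb - Changed Namespace" line) and finally waits on the tool's pid. The waitforfile loop itself, reconstructed from the xtrace fragments above:

    # waitforfile /tmp/aer_touch_file, as traced in autotest_common.sh
    i=0
    while [ ! -e /tmp/aer_touch_file ] && [ "$i" -lt 200 ]; do   # 200 tries at 0.1 s, roughly a 20 s ceiling
        i=$((i + 1))
        sleep 0.1
    done
    [ -e /tmp/aer_touch_file ]   # returns 0 only if the tool signalled readiness in time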
00:22:59.239 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.239 14:41:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:59.239 00:22:59.239 real 0m2.248s 00:22:59.239 user 0m6.438s 00:22:59.239 sys 0m0.574s 00:22:59.239 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:59.239 ************************************ 00:22:59.239 14:41:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:59.239 END TEST nvmf_aer 00:22:59.239 ************************************ 00:22:59.239 14:41:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:59.239 14:41:11 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:59.239 14:41:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:59.239 14:41:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:59.239 14:41:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:59.239 ************************************ 00:22:59.239 START TEST nvmf_async_init 00:22:59.239 ************************************ 00:22:59.239 14:41:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:59.498 * Looking for test storage... 00:22:59.498 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init 
-- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:59.498 
14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=facb1e1b394d4e169b446ebae917a746 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:59.498 Cannot find device "nvmf_tgt_br" 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:59.498 Cannot find device "nvmf_tgt_br2" 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:59.498 Cannot find device "nvmf_tgt_br" 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:59.498 Cannot find device "nvmf_tgt_br2" 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:59.498 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:59.498 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:22:59.499 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:59.499 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:59.499 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:22:59.499 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:59.499 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:59.499 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:59.499 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:59.757 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:59.757 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:59.757 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:59.758 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:59.758 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:59.758 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:59.758 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:59.758 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:59.758 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:59.758 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@187 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:59.758 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:59.758 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:59.758 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:59.758 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:59.758 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:59.758 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:59.758 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:59.758 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:59.758 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:59.758 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:59.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:59.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:22:59.758 00:22:59.758 --- 10.0.0.2 ping statistics --- 00:22:59.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.758 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:22:59.758 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:59.758 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:59.758 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:22:59.758 00:22:59.758 --- 10.0.0.3 ping statistics --- 00:22:59.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.758 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:22:59.758 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:59.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:59.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:22:59.758 00:22:59.758 --- 10.0.0.1 ping statistics --- 00:22:59.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.758 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:22:59.758 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:59.758 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:22:59.758 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:59.758 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:59.758 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:59.758 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:59.758 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:59.758 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:59.758 14:41:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:59.758 14:41:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:59.758 14:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:59.758 14:41:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:59.758 14:41:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:59.758 14:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=105048 00:22:59.758 14:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:59.758 14:41:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 105048 00:22:59.758 14:41:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 105048 ']' 00:22:59.758 14:41:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.758 14:41:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:59.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.758 14:41:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.758 14:41:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:59.758 14:41:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:00.016 [2024-07-10 14:41:12.063665] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:23:00.016 [2024-07-10 14:41:12.063757] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.016 [2024-07-10 14:41:12.183154] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:00.016 [2024-07-10 14:41:12.203465] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.016 [2024-07-10 14:41:12.247464] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
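The async_init body that follows provisions the target over RPC and then attaches to it from the same process as an NVMe/TCP host: a TCP transport, a null bdev, subsystem nqn.2016-06.io.spdk:cnode0 exposing that bdev under a fixed NGUID, a listener on 10.0.0.2:4420, and finally bdev_nvme_attach_controller. Assuming rpc_cmd in this trace is the usual thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, the same sequence run by hand would look roughly like:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc bdev_null_create null0 1024 512        # 1024 MiB backing size, 512 B blocks (num_blocks 2097152 below)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a   # allow any host, per the JSON dumped below
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g facb1e1b394d4e169b446ebae917a746
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    $rpc bdev_get_bdevs -b nvme0n1              # dumps the bdev descriptor JSON shown below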
00:23:00.016 [2024-07-10 14:41:12.247539] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.017 [2024-07-10 14:41:12.247561] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.017 [2024-07-10 14:41:12.247577] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.017 [2024-07-10 14:41:12.247591] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:00.017 [2024-07-10 14:41:12.247644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:00.952 [2024-07-10 14:41:13.103357] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:00.952 null0 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g facb1e1b394d4e169b446ebae917a746 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.952 14:41:13 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:00.952 [2024-07-10 14:41:13.143466] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.952 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.210 nvme0n1 00:23:01.210 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.210 14:41:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:01.210 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.210 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.210 [ 00:23:01.210 { 00:23:01.211 "aliases": [ 00:23:01.211 "facb1e1b-394d-4e16-9b44-6ebae917a746" 00:23:01.211 ], 00:23:01.211 "assigned_rate_limits": { 00:23:01.211 "r_mbytes_per_sec": 0, 00:23:01.211 "rw_ios_per_sec": 0, 00:23:01.211 "rw_mbytes_per_sec": 0, 00:23:01.211 "w_mbytes_per_sec": 0 00:23:01.211 }, 00:23:01.211 "block_size": 512, 00:23:01.211 "claimed": false, 00:23:01.211 "driver_specific": { 00:23:01.211 "mp_policy": "active_passive", 00:23:01.211 "nvme": [ 00:23:01.211 { 00:23:01.211 "ctrlr_data": { 00:23:01.211 "ana_reporting": false, 00:23:01.211 "cntlid": 1, 00:23:01.211 "firmware_revision": "24.09", 00:23:01.211 "model_number": "SPDK bdev Controller", 00:23:01.211 "multi_ctrlr": true, 00:23:01.211 "oacs": { 00:23:01.211 "firmware": 0, 00:23:01.211 "format": 0, 00:23:01.211 "ns_manage": 0, 00:23:01.211 "security": 0 00:23:01.211 }, 00:23:01.211 "serial_number": "00000000000000000000", 00:23:01.211 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:01.211 "vendor_id": "0x8086" 00:23:01.211 }, 00:23:01.211 "ns_data": { 00:23:01.211 "can_share": true, 00:23:01.211 "id": 1 00:23:01.211 }, 00:23:01.211 "trid": { 00:23:01.211 "adrfam": "IPv4", 00:23:01.211 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:01.211 "traddr": "10.0.0.2", 00:23:01.211 "trsvcid": "4420", 00:23:01.211 "trtype": "TCP" 00:23:01.211 }, 00:23:01.211 "vs": { 00:23:01.211 "nvme_version": "1.3" 00:23:01.211 } 00:23:01.211 } 00:23:01.211 ] 00:23:01.211 }, 00:23:01.211 "memory_domains": [ 00:23:01.211 { 00:23:01.211 "dma_device_id": "system", 00:23:01.211 "dma_device_type": 1 00:23:01.211 } 00:23:01.211 ], 00:23:01.211 "name": "nvme0n1", 00:23:01.211 "num_blocks": 2097152, 00:23:01.211 "product_name": "NVMe disk", 00:23:01.211 "supported_io_types": { 00:23:01.211 "abort": true, 00:23:01.211 "compare": true, 00:23:01.211 "compare_and_write": true, 00:23:01.211 "copy": true, 00:23:01.211 "flush": true, 00:23:01.211 "get_zone_info": false, 00:23:01.211 "nvme_admin": true, 00:23:01.211 "nvme_io": true, 00:23:01.211 "nvme_io_md": false, 00:23:01.211 "nvme_iov_md": false, 00:23:01.211 "read": true, 00:23:01.211 "reset": true, 00:23:01.211 "seek_data": false, 00:23:01.211 
"seek_hole": false, 00:23:01.211 "unmap": false, 00:23:01.211 "write": true, 00:23:01.211 "write_zeroes": true, 00:23:01.211 "zcopy": false, 00:23:01.211 "zone_append": false, 00:23:01.211 "zone_management": false 00:23:01.211 }, 00:23:01.211 "uuid": "facb1e1b-394d-4e16-9b44-6ebae917a746", 00:23:01.211 "zoned": false 00:23:01.211 } 00:23:01.211 ] 00:23:01.211 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.211 14:41:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:01.211 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.211 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.211 [2024-07-10 14:41:13.404475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:01.211 [2024-07-10 14:41:13.404598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fdd570 (9): Bad file descriptor 00:23:01.470 [2024-07-10 14:41:13.546510] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.470 [ 00:23:01.470 { 00:23:01.470 "aliases": [ 00:23:01.470 "facb1e1b-394d-4e16-9b44-6ebae917a746" 00:23:01.470 ], 00:23:01.470 "assigned_rate_limits": { 00:23:01.470 "r_mbytes_per_sec": 0, 00:23:01.470 "rw_ios_per_sec": 0, 00:23:01.470 "rw_mbytes_per_sec": 0, 00:23:01.470 "w_mbytes_per_sec": 0 00:23:01.470 }, 00:23:01.470 "block_size": 512, 00:23:01.470 "claimed": false, 00:23:01.470 "driver_specific": { 00:23:01.470 "mp_policy": "active_passive", 00:23:01.470 "nvme": [ 00:23:01.470 { 00:23:01.470 "ctrlr_data": { 00:23:01.470 "ana_reporting": false, 00:23:01.470 "cntlid": 2, 00:23:01.470 "firmware_revision": "24.09", 00:23:01.470 "model_number": "SPDK bdev Controller", 00:23:01.470 "multi_ctrlr": true, 00:23:01.470 "oacs": { 00:23:01.470 "firmware": 0, 00:23:01.470 "format": 0, 00:23:01.470 "ns_manage": 0, 00:23:01.470 "security": 0 00:23:01.470 }, 00:23:01.470 "serial_number": "00000000000000000000", 00:23:01.470 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:01.470 "vendor_id": "0x8086" 00:23:01.470 }, 00:23:01.470 "ns_data": { 00:23:01.470 "can_share": true, 00:23:01.470 "id": 1 00:23:01.470 }, 00:23:01.470 "trid": { 00:23:01.470 "adrfam": "IPv4", 00:23:01.470 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:01.470 "traddr": "10.0.0.2", 00:23:01.470 "trsvcid": "4420", 00:23:01.470 "trtype": "TCP" 00:23:01.470 }, 00:23:01.470 "vs": { 00:23:01.470 "nvme_version": "1.3" 00:23:01.470 } 00:23:01.470 } 00:23:01.470 ] 00:23:01.470 }, 00:23:01.470 "memory_domains": [ 00:23:01.470 { 00:23:01.470 "dma_device_id": "system", 00:23:01.470 "dma_device_type": 1 00:23:01.470 } 00:23:01.470 ], 00:23:01.470 "name": "nvme0n1", 00:23:01.470 "num_blocks": 2097152, 00:23:01.470 "product_name": "NVMe disk", 00:23:01.470 "supported_io_types": { 00:23:01.470 "abort": true, 00:23:01.470 "compare": true, 00:23:01.470 "compare_and_write": true, 00:23:01.470 "copy": true, 00:23:01.470 "flush": true, 00:23:01.470 "get_zone_info": false, 
00:23:01.470 "nvme_admin": true, 00:23:01.470 "nvme_io": true, 00:23:01.470 "nvme_io_md": false, 00:23:01.470 "nvme_iov_md": false, 00:23:01.470 "read": true, 00:23:01.470 "reset": true, 00:23:01.470 "seek_data": false, 00:23:01.470 "seek_hole": false, 00:23:01.470 "unmap": false, 00:23:01.470 "write": true, 00:23:01.470 "write_zeroes": true, 00:23:01.470 "zcopy": false, 00:23:01.470 "zone_append": false, 00:23:01.470 "zone_management": false 00:23:01.470 }, 00:23:01.470 "uuid": "facb1e1b-394d-4e16-9b44-6ebae917a746", 00:23:01.470 "zoned": false 00:23:01.470 } 00:23:01.470 ] 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.1nF4ISypqH 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.1nF4ISypqH 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.470 [2024-07-10 14:41:13.608670] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:01.470 [2024-07-10 14:41:13.608872] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1nF4ISypqH 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.470 [2024-07-10 14:41:13.616661] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1nF4ISypqH 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.470 [2024-07-10 14:41:13.624679] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:01.470 [2024-07-10 14:41:13.624779] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:01.470 nvme0n1 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.470 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.470 [ 00:23:01.470 { 00:23:01.470 "aliases": [ 00:23:01.470 "facb1e1b-394d-4e16-9b44-6ebae917a746" 00:23:01.470 ], 00:23:01.470 "assigned_rate_limits": { 00:23:01.470 "r_mbytes_per_sec": 0, 00:23:01.470 "rw_ios_per_sec": 0, 00:23:01.470 "rw_mbytes_per_sec": 0, 00:23:01.470 "w_mbytes_per_sec": 0 00:23:01.470 }, 00:23:01.470 "block_size": 512, 00:23:01.470 "claimed": false, 00:23:01.470 "driver_specific": { 00:23:01.470 "mp_policy": "active_passive", 00:23:01.470 "nvme": [ 00:23:01.470 { 00:23:01.470 "ctrlr_data": { 00:23:01.470 "ana_reporting": false, 00:23:01.470 "cntlid": 3, 00:23:01.470 "firmware_revision": "24.09", 00:23:01.470 "model_number": "SPDK bdev Controller", 00:23:01.470 "multi_ctrlr": true, 00:23:01.470 "oacs": { 00:23:01.470 "firmware": 0, 00:23:01.470 "format": 0, 00:23:01.470 "ns_manage": 0, 00:23:01.470 "security": 0 00:23:01.470 }, 00:23:01.470 "serial_number": "00000000000000000000", 00:23:01.470 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:01.470 "vendor_id": "0x8086" 00:23:01.470 }, 00:23:01.470 "ns_data": { 00:23:01.470 "can_share": true, 00:23:01.470 "id": 1 00:23:01.470 }, 00:23:01.470 "trid": { 00:23:01.470 "adrfam": "IPv4", 00:23:01.470 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:01.470 "traddr": "10.0.0.2", 00:23:01.470 "trsvcid": "4421", 00:23:01.470 "trtype": "TCP" 00:23:01.470 }, 00:23:01.470 "vs": { 00:23:01.471 "nvme_version": "1.3" 00:23:01.471 } 00:23:01.471 } 00:23:01.471 ] 00:23:01.471 }, 00:23:01.471 "memory_domains": [ 00:23:01.471 { 00:23:01.471 "dma_device_id": "system", 00:23:01.471 "dma_device_type": 1 00:23:01.471 } 00:23:01.471 ], 00:23:01.471 "name": "nvme0n1", 00:23:01.471 "num_blocks": 2097152, 00:23:01.471 "product_name": "NVMe disk", 00:23:01.471 "supported_io_types": { 00:23:01.471 "abort": true, 00:23:01.471 "compare": true, 00:23:01.471 "compare_and_write": true, 00:23:01.471 "copy": true, 00:23:01.471 "flush": true, 00:23:01.471 "get_zone_info": false, 00:23:01.471 "nvme_admin": true, 00:23:01.471 "nvme_io": true, 00:23:01.471 "nvme_io_md": false, 00:23:01.471 "nvme_iov_md": false, 00:23:01.471 "read": true, 00:23:01.471 "reset": true, 00:23:01.471 "seek_data": false, 00:23:01.471 "seek_hole": false, 00:23:01.471 "unmap": false, 00:23:01.471 "write": true, 00:23:01.471 "write_zeroes": true, 00:23:01.471 "zcopy": false, 00:23:01.471 "zone_append": false, 00:23:01.471 "zone_management": false 00:23:01.471 }, 00:23:01.471 "uuid": "facb1e1b-394d-4e16-9b44-6ebae917a746", 00:23:01.471 "zoned": false 00:23:01.471 } 00:23:01.471 ] 00:23:01.471 14:41:13 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.471 14:41:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.471 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.471 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.471 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.471 14:41:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.1nF4ISypqH 00:23:01.471 14:41:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:01.471 14:41:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:01.471 14:41:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:01.471 14:41:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:01.729 14:41:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:01.729 14:41:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:01.729 14:41:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:01.729 14:41:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:01.729 rmmod nvme_tcp 00:23:01.729 rmmod nvme_fabrics 00:23:01.729 rmmod nvme_keyring 00:23:01.729 14:41:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:01.729 14:41:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:01.729 14:41:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:01.729 14:41:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 105048 ']' 00:23:01.730 14:41:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 105048 00:23:01.730 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 105048 ']' 00:23:01.730 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 105048 00:23:01.730 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:23:01.730 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:01.730 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 105048 00:23:01.730 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:01.730 killing process with pid 105048 00:23:01.730 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:01.730 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 105048' 00:23:01.730 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 105048 00:23:01.730 [2024-07-10 14:41:13.886602] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:01.730 [2024-07-10 14:41:13.886645] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:01.730 14:41:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 105048 00:23:01.988 14:41:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:01.988 14:41:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:01.988 14:41:14 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:01.988 14:41:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:01.988 14:41:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:01.988 14:41:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.988 14:41:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:01.988 14:41:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.988 14:41:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:01.988 ************************************ 00:23:01.988 END TEST nvmf_async_init 00:23:01.988 00:23:01.988 real 0m2.590s 00:23:01.988 user 0m2.426s 00:23:01.988 sys 0m0.575s 00:23:01.988 14:41:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:01.988 14:41:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.988 ************************************ 00:23:01.988 14:41:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:01.988 14:41:14 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:01.988 14:41:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:01.988 14:41:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:01.988 14:41:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:01.988 ************************************ 00:23:01.988 START TEST dma 00:23:01.988 ************************************ 00:23:01.988 14:41:14 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:01.988 * Looking for test storage... 
00:23:01.988 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:01.989 14:41:14 nvmf_tcp.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:01.989 14:41:14 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:23:01.989 14:41:14 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:01.989 14:41:14 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:01.989 14:41:14 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:01.989 14:41:14 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:01.989 14:41:14 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:01.989 14:41:14 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:01.989 14:41:14 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:01.989 14:41:14 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:01.989 14:41:14 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:01.989 14:41:14 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:01.989 14:41:14 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:23:01.989 14:41:14 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:23:01.989 14:41:14 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:01.989 14:41:14 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:01.989 14:41:14 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:01.989 14:41:14 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:01.989 14:41:14 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:01.989 14:41:14 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:01.989 14:41:14 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:01.989 14:41:14 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:01.989 14:41:14 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.989 14:41:14 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.989 14:41:14 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.989 14:41:14 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:23:01.989 14:41:14 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.989 14:41:14 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:23:01.989 14:41:14 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:01.989 14:41:14 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:01.989 14:41:14 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:01.989 14:41:14 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:01.989 14:41:14 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:01.989 14:41:14 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:01.989 14:41:14 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:01.989 14:41:14 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:01.989 14:41:14 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:01.989 14:41:14 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:23:01.989 00:23:01.989 real 0m0.106s 00:23:01.989 user 0m0.054s 00:23:01.989 sys 0m0.059s 00:23:01.989 14:41:14 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:01.989 14:41:14 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:23:01.989 ************************************ 00:23:01.989 END TEST dma 00:23:01.989 ************************************ 00:23:01.989 14:41:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:01.989 14:41:14 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:01.989 14:41:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:01.989 14:41:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:01.989 14:41:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:02.248 ************************************ 00:23:02.248 START TEST nvmf_identify 00:23:02.248 ************************************ 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:02.248 * Looking for test storage... 
00:23:02.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:02.248 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:02.249 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:02.249 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:02.249 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:02.249 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:02.249 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:02.249 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:02.249 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:02.249 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:02.249 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:02.249 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:02.249 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:02.249 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:02.249 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:02.249 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:02.249 Cannot find device "nvmf_tgt_br" 00:23:02.249 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:23:02.249 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:02.249 Cannot find device "nvmf_tgt_br2" 00:23:02.249 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:23:02.249 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:02.249 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:02.249 Cannot find device "nvmf_tgt_br" 00:23:02.249 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:23:02.249 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:02.249 Cannot find device "nvmf_tgt_br2" 00:23:02.249 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:23:02.249 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:02.249 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:02.249 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:02.249 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:02.249 14:41:14 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:23:02.249 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:02.249 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:02.249 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:23:02.249 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:02.249 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:02.507 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:02.507 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:02.507 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:02.507 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:02.507 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:02.507 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:02.507 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:02.507 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:02.507 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:02.507 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:02.507 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:02.507 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:02.507 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:02.507 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:02.507 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:02.507 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:02.507 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:02.507 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:02.507 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:02.507 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:02.507 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:02.507 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:02.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:02.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:23:02.507 00:23:02.507 --- 10.0.0.2 ping statistics --- 00:23:02.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.507 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:23:02.507 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:02.507 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:02.507 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:23:02.507 00:23:02.507 --- 10.0.0.3 ping statistics --- 00:23:02.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.507 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:23:02.507 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:02.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:02.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:23:02.507 00:23:02.507 --- 10.0.0.1 ping statistics --- 00:23:02.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.507 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:23:02.507 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:02.508 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:23:02.508 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:02.508 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:02.508 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:02.508 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:02.508 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:02.508 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:02.508 14:41:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:02.508 14:41:14 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:02.508 14:41:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:02.508 14:41:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:02.766 14:41:14 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=105316 00:23:02.766 14:41:14 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:02.767 14:41:14 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:02.767 14:41:14 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 105316 00:23:02.767 14:41:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 105316 ']' 00:23:02.767 14:41:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.767 14:41:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:02.767 14:41:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
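Condensed for reference: the virtual-network bring-up traced above (nvmf_veth_init in nvmf/common.sh) reduces to roughly the shell sequence below. Namespace, interface, address, and binary names are taken from the trace itself; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is omitted for brevity, and backgrounding the target with '&' is an illustrative choice, not a quote from the scripts.

# Namespace for the target plus veth pairs for the initiator and target sides.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addresses: the initiator stays in the root namespace, the target lives in the netns.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

# Bring the links up and join both bridge-side peers to one Linux bridge.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Open the NVMe/TCP port, confirm reachability, load the host-side driver.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
modprobe nvme-tcp

# Start the SPDK target inside the namespace with the flags shown in the trace.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

With the bridge in place the initiator reaches the in-namespace target at 10.0.0.2:4420, while the RPC socket remains reachable as an ordinary path-based UNIX socket.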
00:23:02.767 14:41:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:02.767 14:41:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:02.767 [2024-07-10 14:41:14.849075] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:23:02.767 [2024-07-10 14:41:14.849174] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.767 [2024-07-10 14:41:14.969437] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:02.767 [2024-07-10 14:41:14.989679] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:02.767 [2024-07-10 14:41:15.031731] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:02.767 [2024-07-10 14:41:15.031799] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:02.767 [2024-07-10 14:41:15.031814] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:02.767 [2024-07-10 14:41:15.031824] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:02.767 [2024-07-10 14:41:15.031833] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:02.767 [2024-07-10 14:41:15.031993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.767 [2024-07-10 14:41:15.032061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:02.767 [2024-07-10 14:41:15.032730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:02.767 [2024-07-10 14:41:15.032780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.025 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:03.025 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:23:03.025 14:41:15 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:03.025 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.025 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:03.025 [2024-07-10 14:41:15.123352] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.025 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.025 14:41:15 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:03.025 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:03.025 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:03.025 14:41:15 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:03.025 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.025 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:03.025 Malloc0 00:23:03.025 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.025 14:41:15 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 
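The target configuration the identify test drives through rpc_cmd here, together with the namespace/listener RPCs and the spdk_nvme_identify invocation that follow in the trace, corresponds to roughly the manual sequence below. RPC names and arguments are copied from the trace; invoking them via scripts/rpc.py against the default /var/tmp/spdk.sock socket is an assumption about how the rpc_cmd wrapper resolves, not something shown verbatim above.

# Assumed stand-in for the test's rpc_cmd wrapper.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

# Transport, backing bdev, and subsystem, as issued above.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001

# Namespace and listeners, as issued next in the trace.
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Query the discovery service, as the test does afterwards.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all

The debug output that follows shows the identify tool walking the discovery controller's admin queue (FABRIC CONNECT, PROPERTY GET/SET, IDENTIFY) before moving on to the subsystem it advertises.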
00:23:03.025 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.025 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:03.025 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.025 14:41:15 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:03.025 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.025 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:03.025 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.025 14:41:15 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:03.026 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.026 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:03.026 [2024-07-10 14:41:15.216187] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:03.026 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.026 14:41:15 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:03.026 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.026 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:03.026 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.026 14:41:15 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:03.026 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.026 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:03.026 [ 00:23:03.026 { 00:23:03.026 "allow_any_host": true, 00:23:03.026 "hosts": [], 00:23:03.026 "listen_addresses": [ 00:23:03.026 { 00:23:03.026 "adrfam": "IPv4", 00:23:03.026 "traddr": "10.0.0.2", 00:23:03.026 "trsvcid": "4420", 00:23:03.026 "trtype": "TCP" 00:23:03.026 } 00:23:03.026 ], 00:23:03.026 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:03.026 "subtype": "Discovery" 00:23:03.026 }, 00:23:03.026 { 00:23:03.026 "allow_any_host": true, 00:23:03.026 "hosts": [], 00:23:03.026 "listen_addresses": [ 00:23:03.026 { 00:23:03.026 "adrfam": "IPv4", 00:23:03.026 "traddr": "10.0.0.2", 00:23:03.026 "trsvcid": "4420", 00:23:03.026 "trtype": "TCP" 00:23:03.026 } 00:23:03.026 ], 00:23:03.026 "max_cntlid": 65519, 00:23:03.026 "max_namespaces": 32, 00:23:03.026 "min_cntlid": 1, 00:23:03.026 "model_number": "SPDK bdev Controller", 00:23:03.026 "namespaces": [ 00:23:03.026 { 00:23:03.026 "bdev_name": "Malloc0", 00:23:03.026 "eui64": "ABCDEF0123456789", 00:23:03.026 "name": "Malloc0", 00:23:03.026 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:03.026 "nsid": 1, 00:23:03.026 "uuid": "03e75895-f8b7-438e-8c7f-ad5a045e6be4" 00:23:03.026 } 00:23:03.026 ], 00:23:03.026 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:03.026 "serial_number": "SPDK00000000000001", 00:23:03.026 "subtype": "NVMe" 00:23:03.026 } 00:23:03.026 ] 00:23:03.026 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.026 14:41:15 nvmf_tcp.nvmf_identify -- 
host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:03.026 [2024-07-10 14:41:15.267094] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:23:03.026 [2024-07-10 14:41:15.267180] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105351 ] 00:23:03.288 [2024-07-10 14:41:15.396866] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:03.288 [2024-07-10 14:41:15.415831] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:03.288 [2024-07-10 14:41:15.415914] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:03.288 [2024-07-10 14:41:15.415921] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:03.288 [2024-07-10 14:41:15.415936] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:03.288 [2024-07-10 14:41:15.415944] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:03.288 [2024-07-10 14:41:15.416110] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:03.288 [2024-07-10 14:41:15.416162] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xae2d00 0 00:23:03.288 [2024-07-10 14:41:15.420310] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:03.288 [2024-07-10 14:41:15.420338] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:03.288 [2024-07-10 14:41:15.420345] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:03.288 [2024-07-10 14:41:15.420349] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:03.288 [2024-07-10 14:41:15.420396] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.288 [2024-07-10 14:41:15.420403] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.288 [2024-07-10 14:41:15.420408] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xae2d00) 00:23:03.288 [2024-07-10 14:41:15.420424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:03.288 [2024-07-10 14:41:15.420456] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29500, cid 0, qid 0 00:23:03.288 [2024-07-10 14:41:15.428302] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.288 [2024-07-10 14:41:15.428326] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.288 [2024-07-10 14:41:15.428340] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.288 [2024-07-10 14:41:15.428345] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29500) on tqpair=0xae2d00 00:23:03.288 [2024-07-10 14:41:15.428358] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:03.288 [2024-07-10 14:41:15.428367] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:03.288 [2024-07-10 14:41:15.428375] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:03.288 [2024-07-10 14:41:15.428395] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.288 [2024-07-10 14:41:15.428401] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.288 [2024-07-10 14:41:15.428406] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xae2d00) 00:23:03.288 [2024-07-10 14:41:15.428416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.288 [2024-07-10 14:41:15.428447] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29500, cid 0, qid 0 00:23:03.288 [2024-07-10 14:41:15.428533] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.288 [2024-07-10 14:41:15.428541] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.288 [2024-07-10 14:41:15.428545] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.288 [2024-07-10 14:41:15.428550] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29500) on tqpair=0xae2d00 00:23:03.288 [2024-07-10 14:41:15.428556] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:03.288 [2024-07-10 14:41:15.428565] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:03.288 [2024-07-10 14:41:15.428574] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.288 [2024-07-10 14:41:15.428578] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.288 [2024-07-10 14:41:15.428582] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xae2d00) 00:23:03.288 [2024-07-10 14:41:15.428591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.288 [2024-07-10 14:41:15.428612] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29500, cid 0, qid 0 00:23:03.288 [2024-07-10 14:41:15.428670] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.288 [2024-07-10 14:41:15.428677] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.288 [2024-07-10 14:41:15.428681] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.288 [2024-07-10 14:41:15.428685] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29500) on tqpair=0xae2d00 00:23:03.288 [2024-07-10 14:41:15.428692] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:03.288 [2024-07-10 14:41:15.428702] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:03.288 [2024-07-10 14:41:15.428711] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.288 [2024-07-10 14:41:15.428716] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.288 [2024-07-10 14:41:15.428720] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0xae2d00) 00:23:03.288 [2024-07-10 14:41:15.428728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.288 [2024-07-10 14:41:15.428747] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29500, cid 0, qid 0 00:23:03.288 [2024-07-10 14:41:15.428803] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.289 [2024-07-10 14:41:15.428818] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.289 [2024-07-10 14:41:15.428823] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.289 [2024-07-10 14:41:15.428828] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29500) on tqpair=0xae2d00 00:23:03.289 [2024-07-10 14:41:15.428835] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:03.289 [2024-07-10 14:41:15.428846] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.289 [2024-07-10 14:41:15.428851] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.289 [2024-07-10 14:41:15.428856] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xae2d00) 00:23:03.289 [2024-07-10 14:41:15.428863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.289 [2024-07-10 14:41:15.428884] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29500, cid 0, qid 0 00:23:03.289 [2024-07-10 14:41:15.428949] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.289 [2024-07-10 14:41:15.428956] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.289 [2024-07-10 14:41:15.428960] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.289 [2024-07-10 14:41:15.428964] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29500) on tqpair=0xae2d00 00:23:03.289 [2024-07-10 14:41:15.428970] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:03.289 [2024-07-10 14:41:15.428976] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:03.289 [2024-07-10 14:41:15.428985] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:03.289 [2024-07-10 14:41:15.429091] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:03.289 [2024-07-10 14:41:15.429099] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:03.289 [2024-07-10 14:41:15.429110] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.289 [2024-07-10 14:41:15.429115] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.289 [2024-07-10 14:41:15.429119] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xae2d00) 00:23:03.289 [2024-07-10 14:41:15.429127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:03.289 [2024-07-10 14:41:15.429147] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29500, cid 0, qid 0 00:23:03.289 [2024-07-10 14:41:15.429206] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.289 [2024-07-10 14:41:15.429213] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.289 [2024-07-10 14:41:15.429218] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.289 [2024-07-10 14:41:15.429222] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29500) on tqpair=0xae2d00 00:23:03.289 [2024-07-10 14:41:15.429228] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:03.289 [2024-07-10 14:41:15.429239] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.289 [2024-07-10 14:41:15.429244] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.289 [2024-07-10 14:41:15.429248] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xae2d00) 00:23:03.289 [2024-07-10 14:41:15.429256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.289 [2024-07-10 14:41:15.429275] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29500, cid 0, qid 0 00:23:03.289 [2024-07-10 14:41:15.429350] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.289 [2024-07-10 14:41:15.429360] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.289 [2024-07-10 14:41:15.429363] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.289 [2024-07-10 14:41:15.429368] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29500) on tqpair=0xae2d00 00:23:03.289 [2024-07-10 14:41:15.429374] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:03.289 [2024-07-10 14:41:15.429380] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:03.289 [2024-07-10 14:41:15.429388] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:03.289 [2024-07-10 14:41:15.429399] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:03.289 [2024-07-10 14:41:15.429410] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.289 [2024-07-10 14:41:15.429415] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xae2d00) 00:23:03.289 [2024-07-10 14:41:15.429423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.289 [2024-07-10 14:41:15.429447] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29500, cid 0, qid 0 00:23:03.289 [2024-07-10 14:41:15.429547] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:03.289 [2024-07-10 14:41:15.429556] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:03.289 [2024-07-10 14:41:15.429560] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: enter 00:23:03.289 [2024-07-10 14:41:15.429565] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xae2d00): datao=0, datal=4096, cccid=0 00:23:03.289 [2024-07-10 14:41:15.429570] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb29500) on tqpair(0xae2d00): expected_datao=0, payload_size=4096 00:23:03.289 [2024-07-10 14:41:15.429575] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.289 [2024-07-10 14:41:15.429584] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:03.289 [2024-07-10 14:41:15.429589] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:03.289 [2024-07-10 14:41:15.429598] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.289 [2024-07-10 14:41:15.429605] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.289 [2024-07-10 14:41:15.429608] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.289 [2024-07-10 14:41:15.429613] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29500) on tqpair=0xae2d00 00:23:03.289 [2024-07-10 14:41:15.429623] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:03.289 [2024-07-10 14:41:15.429629] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:03.289 [2024-07-10 14:41:15.429634] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:03.289 [2024-07-10 14:41:15.429640] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:03.289 [2024-07-10 14:41:15.429646] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:03.289 [2024-07-10 14:41:15.429651] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:03.289 [2024-07-10 14:41:15.429661] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:03.289 [2024-07-10 14:41:15.429670] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.289 [2024-07-10 14:41:15.429674] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.289 [2024-07-10 14:41:15.429679] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xae2d00) 00:23:03.289 [2024-07-10 14:41:15.429687] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:03.289 [2024-07-10 14:41:15.429710] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29500, cid 0, qid 0 00:23:03.289 [2024-07-10 14:41:15.429777] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.289 [2024-07-10 14:41:15.429784] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.289 [2024-07-10 14:41:15.429788] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.289 [2024-07-10 14:41:15.429792] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29500) on tqpair=0xae2d00 00:23:03.289 [2024-07-10 14:41:15.429801] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.289 [2024-07-10 
14:41:15.429806] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.289 [2024-07-10 14:41:15.429810] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xae2d00) 00:23:03.289 [2024-07-10 14:41:15.429817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.289 [2024-07-10 14:41:15.429825] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.289 [2024-07-10 14:41:15.429829] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.289 [2024-07-10 14:41:15.429833] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xae2d00) 00:23:03.289 [2024-07-10 14:41:15.429839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.289 [2024-07-10 14:41:15.429846] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.289 [2024-07-10 14:41:15.429850] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.289 [2024-07-10 14:41:15.429854] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xae2d00) 00:23:03.289 [2024-07-10 14:41:15.429860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.289 [2024-07-10 14:41:15.429867] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.289 [2024-07-10 14:41:15.429871] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.289 [2024-07-10 14:41:15.429875] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.289 [2024-07-10 14:41:15.429881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.289 [2024-07-10 14:41:15.429887] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:03.289 [2024-07-10 14:41:15.429901] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:03.289 [2024-07-10 14:41:15.429910] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.289 [2024-07-10 14:41:15.429914] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xae2d00) 00:23:03.289 [2024-07-10 14:41:15.429922] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.289 [2024-07-10 14:41:15.429944] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29500, cid 0, qid 0 00:23:03.289 [2024-07-10 14:41:15.429951] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29680, cid 1, qid 0 00:23:03.289 [2024-07-10 14:41:15.429956] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29800, cid 2, qid 0 00:23:03.289 [2024-07-10 14:41:15.429962] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.289 [2024-07-10 14:41:15.429967] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29b00, cid 4, qid 0 00:23:03.289 [2024-07-10 14:41:15.430058] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
5 00:23:03.289 [2024-07-10 14:41:15.430066] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.289 [2024-07-10 14:41:15.430070] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.289 [2024-07-10 14:41:15.430074] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29b00) on tqpair=0xae2d00 00:23:03.290 [2024-07-10 14:41:15.430080] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:03.290 [2024-07-10 14:41:15.430090] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:03.290 [2024-07-10 14:41:15.430102] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.290 [2024-07-10 14:41:15.430107] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xae2d00) 00:23:03.290 [2024-07-10 14:41:15.430115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.290 [2024-07-10 14:41:15.430135] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29b00, cid 4, qid 0 00:23:03.290 [2024-07-10 14:41:15.430208] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:03.290 [2024-07-10 14:41:15.430224] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:03.290 [2024-07-10 14:41:15.430229] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:03.290 [2024-07-10 14:41:15.430234] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xae2d00): datao=0, datal=4096, cccid=4 00:23:03.290 [2024-07-10 14:41:15.430239] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb29b00) on tqpair(0xae2d00): expected_datao=0, payload_size=4096 00:23:03.290 [2024-07-10 14:41:15.430244] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.290 [2024-07-10 14:41:15.430252] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:03.290 [2024-07-10 14:41:15.430256] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:03.290 [2024-07-10 14:41:15.430265] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.290 [2024-07-10 14:41:15.430272] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.290 [2024-07-10 14:41:15.430276] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.290 [2024-07-10 14:41:15.430291] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29b00) on tqpair=0xae2d00 00:23:03.290 [2024-07-10 14:41:15.430310] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:03.290 [2024-07-10 14:41:15.430354] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.290 [2024-07-10 14:41:15.430362] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xae2d00) 00:23:03.290 [2024-07-10 14:41:15.430370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.290 [2024-07-10 14:41:15.430378] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.290 [2024-07-10 14:41:15.430383] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.290 
[2024-07-10 14:41:15.430387] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xae2d00) 00:23:03.290 [2024-07-10 14:41:15.430393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.290 [2024-07-10 14:41:15.430420] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29b00, cid 4, qid 0 00:23:03.290 [2024-07-10 14:41:15.430428] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29c80, cid 5, qid 0 00:23:03.290 [2024-07-10 14:41:15.430535] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:03.290 [2024-07-10 14:41:15.430542] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:03.290 [2024-07-10 14:41:15.430546] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:03.290 [2024-07-10 14:41:15.430550] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xae2d00): datao=0, datal=1024, cccid=4 00:23:03.290 [2024-07-10 14:41:15.430555] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb29b00) on tqpair(0xae2d00): expected_datao=0, payload_size=1024 00:23:03.290 [2024-07-10 14:41:15.430560] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.290 [2024-07-10 14:41:15.430568] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:03.290 [2024-07-10 14:41:15.430572] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:03.290 [2024-07-10 14:41:15.430578] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.290 [2024-07-10 14:41:15.430584] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.290 [2024-07-10 14:41:15.430588] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.290 [2024-07-10 14:41:15.430593] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29c80) on tqpair=0xae2d00 00:23:03.290 [2024-07-10 14:41:15.471403] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.290 [2024-07-10 14:41:15.471445] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.290 [2024-07-10 14:41:15.471451] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.290 [2024-07-10 14:41:15.471456] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29b00) on tqpair=0xae2d00 00:23:03.290 [2024-07-10 14:41:15.471491] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.290 [2024-07-10 14:41:15.471497] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xae2d00) 00:23:03.290 [2024-07-10 14:41:15.471512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.290 [2024-07-10 14:41:15.471553] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29b00, cid 4, qid 0 00:23:03.290 [2024-07-10 14:41:15.471673] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:03.290 [2024-07-10 14:41:15.471680] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:03.290 [2024-07-10 14:41:15.471684] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:03.290 [2024-07-10 14:41:15.471689] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xae2d00): datao=0, datal=3072, cccid=4 00:23:03.290 [2024-07-10 14:41:15.471695] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb29b00) on tqpair(0xae2d00): expected_datao=0, payload_size=3072 00:23:03.290 [2024-07-10 14:41:15.471700] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.290 [2024-07-10 14:41:15.471709] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:03.290 [2024-07-10 14:41:15.471714] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:03.290 [2024-07-10 14:41:15.471723] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.290 [2024-07-10 14:41:15.471729] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.290 [2024-07-10 14:41:15.471733] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.290 [2024-07-10 14:41:15.471738] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29b00) on tqpair=0xae2d00 00:23:03.290 [2024-07-10 14:41:15.471749] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.290 [2024-07-10 14:41:15.471754] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xae2d00) 00:23:03.290 [2024-07-10 14:41:15.471762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.290 [2024-07-10 14:41:15.471789] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29b00, cid 4, qid 0 00:23:03.290 [2024-07-10 14:41:15.471864] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:03.290 [2024-07-10 14:41:15.471871] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:03.290 [2024-07-10 14:41:15.471875] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:03.290 [2024-07-10 14:41:15.471879] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xae2d00): datao=0, datal=8, cccid=4 00:23:03.290 [2024-07-10 14:41:15.471884] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb29b00) on tqpair(0xae2d00): expected_datao=0, payload_size=8 00:23:03.290 [2024-07-10 14:41:15.471889] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.290 [2024-07-10 14:41:15.471896] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:03.290 [2024-07-10 14:41:15.471900] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:03.290 [2024-07-10 14:41:15.514346] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.290 [2024-07-10 14:41:15.514370] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.290 [2024-07-10 14:41:15.514376] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.290 [2024-07-10 14:41:15.514381] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29b00) on tqpair=0xae2d00 00:23:03.290 ===================================================== 00:23:03.290 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:03.290 ===================================================== 00:23:03.290 Controller Capabilities/Features 00:23:03.290 ================================ 00:23:03.290 Vendor ID: 0000 00:23:03.290 Subsystem Vendor ID: 0000 00:23:03.290 Serial Number: .................... 00:23:03.290 Model Number: ........................................ 
00:23:03.290 Firmware Version: 24.09 00:23:03.290 Recommended Arb Burst: 0 00:23:03.290 IEEE OUI Identifier: 00 00 00 00:23:03.290 Multi-path I/O 00:23:03.290 May have multiple subsystem ports: No 00:23:03.290 May have multiple controllers: No 00:23:03.290 Associated with SR-IOV VF: No 00:23:03.290 Max Data Transfer Size: 131072 00:23:03.290 Max Number of Namespaces: 0 00:23:03.290 Max Number of I/O Queues: 1024 00:23:03.290 NVMe Specification Version (VS): 1.3 00:23:03.290 NVMe Specification Version (Identify): 1.3 00:23:03.290 Maximum Queue Entries: 128 00:23:03.290 Contiguous Queues Required: Yes 00:23:03.290 Arbitration Mechanisms Supported 00:23:03.290 Weighted Round Robin: Not Supported 00:23:03.290 Vendor Specific: Not Supported 00:23:03.290 Reset Timeout: 15000 ms 00:23:03.290 Doorbell Stride: 4 bytes 00:23:03.290 NVM Subsystem Reset: Not Supported 00:23:03.290 Command Sets Supported 00:23:03.290 NVM Command Set: Supported 00:23:03.290 Boot Partition: Not Supported 00:23:03.290 Memory Page Size Minimum: 4096 bytes 00:23:03.290 Memory Page Size Maximum: 4096 bytes 00:23:03.290 Persistent Memory Region: Not Supported 00:23:03.290 Optional Asynchronous Events Supported 00:23:03.290 Namespace Attribute Notices: Not Supported 00:23:03.290 Firmware Activation Notices: Not Supported 00:23:03.290 ANA Change Notices: Not Supported 00:23:03.290 PLE Aggregate Log Change Notices: Not Supported 00:23:03.290 LBA Status Info Alert Notices: Not Supported 00:23:03.290 EGE Aggregate Log Change Notices: Not Supported 00:23:03.290 Normal NVM Subsystem Shutdown event: Not Supported 00:23:03.290 Zone Descriptor Change Notices: Not Supported 00:23:03.290 Discovery Log Change Notices: Supported 00:23:03.290 Controller Attributes 00:23:03.290 128-bit Host Identifier: Not Supported 00:23:03.290 Non-Operational Permissive Mode: Not Supported 00:23:03.290 NVM Sets: Not Supported 00:23:03.290 Read Recovery Levels: Not Supported 00:23:03.290 Endurance Groups: Not Supported 00:23:03.290 Predictable Latency Mode: Not Supported 00:23:03.290 Traffic Based Keep ALive: Not Supported 00:23:03.290 Namespace Granularity: Not Supported 00:23:03.290 SQ Associations: Not Supported 00:23:03.290 UUID List: Not Supported 00:23:03.290 Multi-Domain Subsystem: Not Supported 00:23:03.290 Fixed Capacity Management: Not Supported 00:23:03.290 Variable Capacity Management: Not Supported 00:23:03.290 Delete Endurance Group: Not Supported 00:23:03.290 Delete NVM Set: Not Supported 00:23:03.290 Extended LBA Formats Supported: Not Supported 00:23:03.290 Flexible Data Placement Supported: Not Supported 00:23:03.290 00:23:03.290 Controller Memory Buffer Support 00:23:03.291 ================================ 00:23:03.291 Supported: No 00:23:03.291 00:23:03.291 Persistent Memory Region Support 00:23:03.291 ================================ 00:23:03.291 Supported: No 00:23:03.291 00:23:03.291 Admin Command Set Attributes 00:23:03.291 ============================ 00:23:03.291 Security Send/Receive: Not Supported 00:23:03.291 Format NVM: Not Supported 00:23:03.291 Firmware Activate/Download: Not Supported 00:23:03.291 Namespace Management: Not Supported 00:23:03.291 Device Self-Test: Not Supported 00:23:03.291 Directives: Not Supported 00:23:03.291 NVMe-MI: Not Supported 00:23:03.291 Virtualization Management: Not Supported 00:23:03.291 Doorbell Buffer Config: Not Supported 00:23:03.291 Get LBA Status Capability: Not Supported 00:23:03.291 Command & Feature Lockdown Capability: Not Supported 00:23:03.291 Abort Command Limit: 1 00:23:03.291 Async 
Event Request Limit: 4 00:23:03.291 Number of Firmware Slots: N/A 00:23:03.291 Firmware Slot 1 Read-Only: N/A 00:23:03.291 Firmware Activation Without Reset: N/A 00:23:03.291 Multiple Update Detection Support: N/A 00:23:03.291 Firmware Update Granularity: No Information Provided 00:23:03.291 Per-Namespace SMART Log: No 00:23:03.291 Asymmetric Namespace Access Log Page: Not Supported 00:23:03.291 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:03.291 Command Effects Log Page: Not Supported 00:23:03.291 Get Log Page Extended Data: Supported 00:23:03.291 Telemetry Log Pages: Not Supported 00:23:03.291 Persistent Event Log Pages: Not Supported 00:23:03.291 Supported Log Pages Log Page: May Support 00:23:03.291 Commands Supported & Effects Log Page: Not Supported 00:23:03.291 Feature Identifiers & Effects Log Page:May Support 00:23:03.291 NVMe-MI Commands & Effects Log Page: May Support 00:23:03.291 Data Area 4 for Telemetry Log: Not Supported 00:23:03.291 Error Log Page Entries Supported: 128 00:23:03.291 Keep Alive: Not Supported 00:23:03.291 00:23:03.291 NVM Command Set Attributes 00:23:03.291 ========================== 00:23:03.291 Submission Queue Entry Size 00:23:03.291 Max: 1 00:23:03.291 Min: 1 00:23:03.291 Completion Queue Entry Size 00:23:03.291 Max: 1 00:23:03.291 Min: 1 00:23:03.291 Number of Namespaces: 0 00:23:03.291 Compare Command: Not Supported 00:23:03.291 Write Uncorrectable Command: Not Supported 00:23:03.291 Dataset Management Command: Not Supported 00:23:03.291 Write Zeroes Command: Not Supported 00:23:03.291 Set Features Save Field: Not Supported 00:23:03.291 Reservations: Not Supported 00:23:03.291 Timestamp: Not Supported 00:23:03.291 Copy: Not Supported 00:23:03.291 Volatile Write Cache: Not Present 00:23:03.291 Atomic Write Unit (Normal): 1 00:23:03.291 Atomic Write Unit (PFail): 1 00:23:03.291 Atomic Compare & Write Unit: 1 00:23:03.291 Fused Compare & Write: Supported 00:23:03.291 Scatter-Gather List 00:23:03.291 SGL Command Set: Supported 00:23:03.291 SGL Keyed: Supported 00:23:03.291 SGL Bit Bucket Descriptor: Not Supported 00:23:03.291 SGL Metadata Pointer: Not Supported 00:23:03.291 Oversized SGL: Not Supported 00:23:03.291 SGL Metadata Address: Not Supported 00:23:03.291 SGL Offset: Supported 00:23:03.291 Transport SGL Data Block: Not Supported 00:23:03.291 Replay Protected Memory Block: Not Supported 00:23:03.291 00:23:03.291 Firmware Slot Information 00:23:03.291 ========================= 00:23:03.291 Active slot: 0 00:23:03.291 00:23:03.291 00:23:03.291 Error Log 00:23:03.291 ========= 00:23:03.291 00:23:03.291 Active Namespaces 00:23:03.291 ================= 00:23:03.291 Discovery Log Page 00:23:03.291 ================== 00:23:03.291 Generation Counter: 2 00:23:03.291 Number of Records: 2 00:23:03.291 Record Format: 0 00:23:03.291 00:23:03.291 Discovery Log Entry 0 00:23:03.291 ---------------------- 00:23:03.291 Transport Type: 3 (TCP) 00:23:03.291 Address Family: 1 (IPv4) 00:23:03.291 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:03.291 Entry Flags: 00:23:03.291 Duplicate Returned Information: 1 00:23:03.291 Explicit Persistent Connection Support for Discovery: 1 00:23:03.291 Transport Requirements: 00:23:03.291 Secure Channel: Not Required 00:23:03.291 Port ID: 0 (0x0000) 00:23:03.291 Controller ID: 65535 (0xffff) 00:23:03.291 Admin Max SQ Size: 128 00:23:03.291 Transport Service Identifier: 4420 00:23:03.291 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:03.291 Transport Address: 10.0.0.2 00:23:03.291 
Discovery Log Entry 1 00:23:03.291 ---------------------- 00:23:03.291 Transport Type: 3 (TCP) 00:23:03.291 Address Family: 1 (IPv4) 00:23:03.291 Subsystem Type: 2 (NVM Subsystem) 00:23:03.291 Entry Flags: 00:23:03.291 Duplicate Returned Information: 0 00:23:03.291 Explicit Persistent Connection Support for Discovery: 0 00:23:03.291 Transport Requirements: 00:23:03.291 Secure Channel: Not Required 00:23:03.291 Port ID: 0 (0x0000) 00:23:03.291 Controller ID: 65535 (0xffff) 00:23:03.291 Admin Max SQ Size: 128 00:23:03.291 Transport Service Identifier: 4420 00:23:03.291 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:03.291 Transport Address: 10.0.0.2 [2024-07-10 14:41:15.514516] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:03.291 [2024-07-10 14:41:15.514533] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29500) on tqpair=0xae2d00 00:23:03.291 [2024-07-10 14:41:15.514542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.291 [2024-07-10 14:41:15.514548] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29680) on tqpair=0xae2d00 00:23:03.291 [2024-07-10 14:41:15.514554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.291 [2024-07-10 14:41:15.514559] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29800) on tqpair=0xae2d00 00:23:03.291 [2024-07-10 14:41:15.514564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.291 [2024-07-10 14:41:15.514570] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.291 [2024-07-10 14:41:15.514575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.291 [2024-07-10 14:41:15.514587] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.291 [2024-07-10 14:41:15.514592] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.291 [2024-07-10 14:41:15.514597] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.291 [2024-07-10 14:41:15.514606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.291 [2024-07-10 14:41:15.514633] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.291 [2024-07-10 14:41:15.514698] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.291 [2024-07-10 14:41:15.514705] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.291 [2024-07-10 14:41:15.514709] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.291 [2024-07-10 14:41:15.514714] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.291 [2024-07-10 14:41:15.514723] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.291 [2024-07-10 14:41:15.514728] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.291 [2024-07-10 14:41:15.514732] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.291 [2024-07-10 14:41:15.514740] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.291 [2024-07-10 14:41:15.514766] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.291 [2024-07-10 14:41:15.514850] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.291 [2024-07-10 14:41:15.514857] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.291 [2024-07-10 14:41:15.514861] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.291 [2024-07-10 14:41:15.514866] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.291 [2024-07-10 14:41:15.514871] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:03.291 [2024-07-10 14:41:15.514877] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:03.291 [2024-07-10 14:41:15.514888] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.291 [2024-07-10 14:41:15.514893] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.291 [2024-07-10 14:41:15.514897] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.291 [2024-07-10 14:41:15.514904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.291 [2024-07-10 14:41:15.514924] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.291 [2024-07-10 14:41:15.514979] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.291 [2024-07-10 14:41:15.514987] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.291 [2024-07-10 14:41:15.514990] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.291 [2024-07-10 14:41:15.514995] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.291 [2024-07-10 14:41:15.515007] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.291 [2024-07-10 14:41:15.515012] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.291 [2024-07-10 14:41:15.515016] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.291 [2024-07-10 14:41:15.515024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.291 [2024-07-10 14:41:15.515043] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.291 [2024-07-10 14:41:15.515098] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.291 [2024-07-10 14:41:15.515106] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.291 [2024-07-10 14:41:15.515110] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.515115] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.292 [2024-07-10 14:41:15.515126] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.515131] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.515135] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.292 [2024-07-10 14:41:15.515143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.292 [2024-07-10 14:41:15.515161] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.292 [2024-07-10 14:41:15.515217] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.292 [2024-07-10 14:41:15.515224] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.292 [2024-07-10 14:41:15.515228] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.515232] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.292 [2024-07-10 14:41:15.515244] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.515249] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.515253] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.292 [2024-07-10 14:41:15.515260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.292 [2024-07-10 14:41:15.515279] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.292 [2024-07-10 14:41:15.515358] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.292 [2024-07-10 14:41:15.515366] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.292 [2024-07-10 14:41:15.515370] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.515374] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.292 [2024-07-10 14:41:15.515386] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.515391] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.515395] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.292 [2024-07-10 14:41:15.515403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.292 [2024-07-10 14:41:15.515424] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.292 [2024-07-10 14:41:15.515483] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.292 [2024-07-10 14:41:15.515491] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.292 [2024-07-10 14:41:15.515494] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.515499] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.292 [2024-07-10 14:41:15.515510] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.515515] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.515519] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.292 [2024-07-10 14:41:15.515527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.292 [2024-07-10 14:41:15.515546] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.292 [2024-07-10 14:41:15.515598] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.292 [2024-07-10 14:41:15.515605] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.292 [2024-07-10 14:41:15.515609] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.515614] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.292 [2024-07-10 14:41:15.515625] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.515630] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.515634] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.292 [2024-07-10 14:41:15.515641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.292 [2024-07-10 14:41:15.515660] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.292 [2024-07-10 14:41:15.515724] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.292 [2024-07-10 14:41:15.515731] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.292 [2024-07-10 14:41:15.515735] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.515739] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.292 [2024-07-10 14:41:15.515751] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.515756] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.515760] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.292 [2024-07-10 14:41:15.515767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.292 [2024-07-10 14:41:15.515786] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.292 [2024-07-10 14:41:15.515851] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.292 [2024-07-10 14:41:15.515863] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.292 [2024-07-10 14:41:15.515867] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.515872] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.292 [2024-07-10 14:41:15.515884] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.515889] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.515893] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.292 [2024-07-10 14:41:15.515901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.292 [2024-07-10 14:41:15.515921] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.292 [2024-07-10 14:41:15.515980] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.292 [2024-07-10 14:41:15.515988] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.292 [2024-07-10 14:41:15.515992] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.515996] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.292 [2024-07-10 14:41:15.516007] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.516012] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.516016] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.292 [2024-07-10 14:41:15.516024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.292 [2024-07-10 14:41:15.516044] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.292 [2024-07-10 14:41:15.516100] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.292 [2024-07-10 14:41:15.516107] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.292 [2024-07-10 14:41:15.516111] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.516115] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.292 [2024-07-10 14:41:15.516126] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.516131] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.516135] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.292 [2024-07-10 14:41:15.516143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.292 [2024-07-10 14:41:15.516162] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.292 [2024-07-10 14:41:15.516221] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.292 [2024-07-10 14:41:15.516228] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.292 [2024-07-10 14:41:15.516232] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.516237] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.292 [2024-07-10 14:41:15.516248] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.516253] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.292 [2024-07-10 14:41:15.516257] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.292 [2024-07-10 14:41:15.516264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.293 [2024-07-10 14:41:15.516293] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.293 [2024-07-10 14:41:15.516354] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.293 [2024-07-10 14:41:15.516361] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.293 [2024-07-10 14:41:15.516365] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.293 [2024-07-10 14:41:15.516369] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.293 [2024-07-10 14:41:15.516381] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.293 [2024-07-10 14:41:15.516386] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.293 [2024-07-10 14:41:15.516390] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.293 [2024-07-10 14:41:15.516397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.293 [2024-07-10 14:41:15.516418] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.293 [2024-07-10 14:41:15.516475] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.293 [2024-07-10 14:41:15.516482] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.293 [2024-07-10 14:41:15.516486] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.293 [2024-07-10 14:41:15.516491] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.293 [2024-07-10 14:41:15.516502] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.293 [2024-07-10 14:41:15.516507] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.293 [2024-07-10 14:41:15.516511] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.293 [2024-07-10 14:41:15.516519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.293 [2024-07-10 14:41:15.516538] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.293 [2024-07-10 14:41:15.516596] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.293 [2024-07-10 14:41:15.516603] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.293 [2024-07-10 14:41:15.516607] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.293 [2024-07-10 14:41:15.516612] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.293 [2024-07-10 14:41:15.516623] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.293 [2024-07-10 14:41:15.516628] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.293 [2024-07-10 14:41:15.516632] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.293 [2024-07-10 14:41:15.516640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.293 [2024-07-10 14:41:15.516659] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.293 [2024-07-10 14:41:15.516710] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.293 [2024-07-10 14:41:15.516717] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.293 [2024-07-10 14:41:15.516721] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.293 [2024-07-10 14:41:15.516725] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.293 
[2024-07-10 14:41:15.516736] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.293 [2024-07-10 14:41:15.516741] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.293 [2024-07-10 14:41:15.516745] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.293 [2024-07-10 14:41:15.516753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.293 [2024-07-10 14:41:15.516772] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.293 [2024-07-10 14:41:15.516840] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.293 [2024-07-10 14:41:15.516849] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.293 [2024-07-10 14:41:15.516853] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.293 [2024-07-10 14:41:15.516857] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.293 [2024-07-10 14:41:15.516869] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.293 [2024-07-10 14:41:15.516874] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.293 [2024-07-10 14:41:15.516878] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.293 [2024-07-10 14:41:15.516885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.293 [2024-07-10 14:41:15.516906] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.293 [2024-07-10 14:41:15.516963] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.293 [2024-07-10 14:41:15.516970] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.293 [2024-07-10 14:41:15.516974] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.293 [2024-07-10 14:41:15.516979] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.293 [2024-07-10 14:41:15.516995] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.293 [2024-07-10 14:41:15.517004] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.293 [2024-07-10 14:41:15.517010] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.293 [2024-07-10 14:41:15.517018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.293 [2024-07-10 14:41:15.517040] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.293 [2024-07-10 14:41:15.517096] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.293 [2024-07-10 14:41:15.517104] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.293 [2024-07-10 14:41:15.517107] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.293 [2024-07-10 14:41:15.517112] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.293 [2024-07-10 14:41:15.517123] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.293 [2024-07-10 14:41:15.517129] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.293 [2024-07-10 
14:41:15.517133] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.293 [2024-07-10 14:41:15.517140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.293 [2024-07-10 14:41:15.517159] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.293 [2024-07-10 14:41:15.517215] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.293 [2024-07-10 14:41:15.517222] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.293 [2024-07-10 14:41:15.517227] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.293 [2024-07-10 14:41:15.517231] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.293 [2024-07-10 14:41:15.517242] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.293 [2024-07-10 14:41:15.517247] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.293 [2024-07-10 14:41:15.517251] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.293 [2024-07-10 14:41:15.517259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.293 [2024-07-10 14:41:15.517278] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.293 [2024-07-10 14:41:15.517359] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.293 [2024-07-10 14:41:15.517367] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.293 [2024-07-10 14:41:15.517371] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.293 [2024-07-10 14:41:15.517375] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.293 [2024-07-10 14:41:15.517387] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.293 [2024-07-10 14:41:15.517392] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.293 [2024-07-10 14:41:15.517396] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.293 [2024-07-10 14:41:15.517404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.293 [2024-07-10 14:41:15.517425] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.293 [2024-07-10 14:41:15.517481] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.293 [2024-07-10 14:41:15.517488] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.293 [2024-07-10 14:41:15.517492] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.293 [2024-07-10 14:41:15.517497] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.293 [2024-07-10 14:41:15.517508] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.293 [2024-07-10 14:41:15.517513] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.293 [2024-07-10 14:41:15.517517] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.293 [2024-07-10 14:41:15.517525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.293 [2024-07-10 14:41:15.517544] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.294 [2024-07-10 14:41:15.517601] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.294 [2024-07-10 14:41:15.517608] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.294 [2024-07-10 14:41:15.517612] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.294 [2024-07-10 14:41:15.517616] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.294 [2024-07-10 14:41:15.517628] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.294 [2024-07-10 14:41:15.517633] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.294 [2024-07-10 14:41:15.517637] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.294 [2024-07-10 14:41:15.517645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.294 [2024-07-10 14:41:15.517663] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.294 [2024-07-10 14:41:15.517719] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.294 [2024-07-10 14:41:15.517726] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.294 [2024-07-10 14:41:15.517730] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.294 [2024-07-10 14:41:15.517734] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.294 [2024-07-10 14:41:15.517745] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.294 [2024-07-10 14:41:15.517751] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.294 [2024-07-10 14:41:15.517755] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.294 [2024-07-10 14:41:15.517762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.294 [2024-07-10 14:41:15.517781] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.294 [2024-07-10 14:41:15.517837] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.294 [2024-07-10 14:41:15.517845] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.294 [2024-07-10 14:41:15.517849] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.294 [2024-07-10 14:41:15.517853] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.294 [2024-07-10 14:41:15.517864] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.294 [2024-07-10 14:41:15.517869] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.294 [2024-07-10 14:41:15.517873] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.294 [2024-07-10 14:41:15.517881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.294 [2024-07-10 14:41:15.517900] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.294 [2024-07-10 
14:41:15.517952] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.294 [2024-07-10 14:41:15.517959] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.294 [2024-07-10 14:41:15.517963] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.294 [2024-07-10 14:41:15.517967] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.294 [2024-07-10 14:41:15.517979] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.294 [2024-07-10 14:41:15.517984] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.294 [2024-07-10 14:41:15.517988] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.294 [2024-07-10 14:41:15.517996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.294 [2024-07-10 14:41:15.518014] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.294 [2024-07-10 14:41:15.518070] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.294 [2024-07-10 14:41:15.518077] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.294 [2024-07-10 14:41:15.518081] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.294 [2024-07-10 14:41:15.518085] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.294 [2024-07-10 14:41:15.518097] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.294 [2024-07-10 14:41:15.518102] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.294 [2024-07-10 14:41:15.518106] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.294 [2024-07-10 14:41:15.518113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.294 [2024-07-10 14:41:15.518132] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.294 [2024-07-10 14:41:15.518187] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.294 [2024-07-10 14:41:15.518200] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.294 [2024-07-10 14:41:15.518204] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.294 [2024-07-10 14:41:15.518209] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.294 [2024-07-10 14:41:15.518220] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.294 [2024-07-10 14:41:15.518226] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.294 [2024-07-10 14:41:15.518230] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.294 [2024-07-10 14:41:15.518238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.294 [2024-07-10 14:41:15.518257] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.294 [2024-07-10 14:41:15.522303] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.294 [2024-07-10 14:41:15.522326] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.294 [2024-07-10 
14:41:15.522331] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.294 [2024-07-10 14:41:15.522336] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.294 [2024-07-10 14:41:15.522351] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.294 [2024-07-10 14:41:15.522357] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.294 [2024-07-10 14:41:15.522361] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae2d00) 00:23:03.294 [2024-07-10 14:41:15.522370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.294 [2024-07-10 14:41:15.522398] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb29980, cid 3, qid 0 00:23:03.294 [2024-07-10 14:41:15.522467] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.294 [2024-07-10 14:41:15.522475] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.294 [2024-07-10 14:41:15.522479] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.294 [2024-07-10 14:41:15.522483] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb29980) on tqpair=0xae2d00 00:23:03.294 [2024-07-10 14:41:15.522492] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:23:03.294 00:23:03.294 14:41:15 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:03.294 [2024-07-10 14:41:15.564257] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:23:03.294 [2024-07-10 14:41:15.564333] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105354 ] 00:23:03.562 [2024-07-10 14:41:15.690727] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
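The trace that follows is the controller bring-up that spdk_nvme_identify performs over the TCP transport: Fabrics CONNECT on the admin queue, PROPERTY GET/SET for VS, CAP, CC and CSTS, then the Identify and feature commands. As a reading aid only, here is a minimal sketch of driving the same connect through SPDK's public C API; it is not the test script's code, and the application name and error handling are illustrative assumptions.

    /* Sketch: connect to the same NVMe-oF TCP subsystem with SPDK's public API. */
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";   /* illustrative app name */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Same transport ID string the test passes to spdk_nvme_identify -r */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Synchronous connect; internally this runs the CONNECT / PROPERTY GET /
         * PROPERTY SET admin sequence recorded in the DEBUG trace below. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("MN: %.40s  SN: %.20s  FR: %.8s\n",
               (const char *)cdata->mn, (const char *)cdata->sn, (const char *)cdata->fr);

        spdk_nvme_detach(ctrlr);
        return 0;
    }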
00:23:03.562 [2024-07-10 14:41:15.709924] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:03.562 [2024-07-10 14:41:15.709984] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:03.562 [2024-07-10 14:41:15.709992] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:03.562 [2024-07-10 14:41:15.710005] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:03.562 [2024-07-10 14:41:15.710013] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:03.562 [2024-07-10 14:41:15.710154] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:03.562 [2024-07-10 14:41:15.710213] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xbdad00 0 00:23:03.562 [2024-07-10 14:41:15.722312] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:03.562 [2024-07-10 14:41:15.722336] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:03.562 [2024-07-10 14:41:15.722342] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:03.562 [2024-07-10 14:41:15.722347] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:03.562 [2024-07-10 14:41:15.722391] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.562 [2024-07-10 14:41:15.722399] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.562 [2024-07-10 14:41:15.722403] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbdad00) 00:23:03.562 [2024-07-10 14:41:15.722420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:03.562 [2024-07-10 14:41:15.722455] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21500, cid 0, qid 0 00:23:03.562 [2024-07-10 14:41:15.730308] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.562 [2024-07-10 14:41:15.730330] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.562 [2024-07-10 14:41:15.730335] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.562 [2024-07-10 14:41:15.730341] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21500) on tqpair=0xbdad00 00:23:03.562 [2024-07-10 14:41:15.730353] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:03.562 [2024-07-10 14:41:15.730362] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:03.562 [2024-07-10 14:41:15.730369] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:03.562 [2024-07-10 14:41:15.730389] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.562 [2024-07-10 14:41:15.730395] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.562 [2024-07-10 14:41:15.730399] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbdad00) 00:23:03.562 [2024-07-10 14:41:15.730410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.562 [2024-07-10 14:41:15.730442] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21500, cid 0, qid 0 00:23:03.562 [2024-07-10 14:41:15.730533] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.562 [2024-07-10 14:41:15.730541] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.562 [2024-07-10 14:41:15.730546] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.562 [2024-07-10 14:41:15.730550] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21500) on tqpair=0xbdad00 00:23:03.562 [2024-07-10 14:41:15.730557] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:03.562 [2024-07-10 14:41:15.730566] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:03.562 [2024-07-10 14:41:15.730574] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.562 [2024-07-10 14:41:15.730579] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.562 [2024-07-10 14:41:15.730584] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbdad00) 00:23:03.562 [2024-07-10 14:41:15.730592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.562 [2024-07-10 14:41:15.730614] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21500, cid 0, qid 0 00:23:03.562 [2024-07-10 14:41:15.730688] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.562 [2024-07-10 14:41:15.730695] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.562 [2024-07-10 14:41:15.730699] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.562 [2024-07-10 14:41:15.730704] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21500) on tqpair=0xbdad00 00:23:03.562 [2024-07-10 14:41:15.730711] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:03.562 [2024-07-10 14:41:15.730720] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:03.562 [2024-07-10 14:41:15.730729] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.562 [2024-07-10 14:41:15.730734] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.562 [2024-07-10 14:41:15.730738] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbdad00) 00:23:03.562 [2024-07-10 14:41:15.730746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.562 [2024-07-10 14:41:15.730766] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21500, cid 0, qid 0 00:23:03.562 [2024-07-10 14:41:15.730833] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.562 [2024-07-10 14:41:15.730840] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.562 [2024-07-10 14:41:15.730844] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.562 [2024-07-10 14:41:15.730849] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21500) on tqpair=0xbdad00 00:23:03.562 [2024-07-10 14:41:15.730855] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:03.562 [2024-07-10 14:41:15.730866] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.562 [2024-07-10 14:41:15.730871] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.562 [2024-07-10 14:41:15.730876] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbdad00) 00:23:03.562 [2024-07-10 14:41:15.730884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.562 [2024-07-10 14:41:15.730903] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21500, cid 0, qid 0 00:23:03.562 [2024-07-10 14:41:15.730968] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.562 [2024-07-10 14:41:15.730980] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.562 [2024-07-10 14:41:15.730985] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.562 [2024-07-10 14:41:15.730990] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21500) on tqpair=0xbdad00 00:23:03.562 [2024-07-10 14:41:15.730996] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:03.562 [2024-07-10 14:41:15.731002] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:03.562 [2024-07-10 14:41:15.731011] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:03.562 [2024-07-10 14:41:15.731118] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:03.562 [2024-07-10 14:41:15.731126] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:03.563 [2024-07-10 14:41:15.731137] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.563 [2024-07-10 14:41:15.731142] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.563 [2024-07-10 14:41:15.731147] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbdad00) 00:23:03.563 [2024-07-10 14:41:15.731155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.563 [2024-07-10 14:41:15.731176] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21500, cid 0, qid 0 00:23:03.563 [2024-07-10 14:41:15.731242] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.563 [2024-07-10 14:41:15.731250] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.563 [2024-07-10 14:41:15.731254] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.563 [2024-07-10 14:41:15.731259] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21500) on tqpair=0xbdad00 00:23:03.563 [2024-07-10 14:41:15.731264] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:03.563 [2024-07-10 14:41:15.731276] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.563 [2024-07-10 14:41:15.731297] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:23:03.563 [2024-07-10 14:41:15.731306] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbdad00) 00:23:03.563 [2024-07-10 14:41:15.731318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.563 [2024-07-10 14:41:15.731343] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21500, cid 0, qid 0 00:23:03.563 [2024-07-10 14:41:15.731416] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.563 [2024-07-10 14:41:15.731429] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.563 [2024-07-10 14:41:15.731434] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.563 [2024-07-10 14:41:15.731439] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21500) on tqpair=0xbdad00 00:23:03.563 [2024-07-10 14:41:15.731444] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:03.563 [2024-07-10 14:41:15.731450] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:03.563 [2024-07-10 14:41:15.731459] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:03.563 [2024-07-10 14:41:15.731471] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:03.563 [2024-07-10 14:41:15.731483] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.563 [2024-07-10 14:41:15.731488] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbdad00) 00:23:03.563 [2024-07-10 14:41:15.731496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.563 [2024-07-10 14:41:15.731518] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21500, cid 0, qid 0 00:23:03.563 [2024-07-10 14:41:15.731632] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:03.563 [2024-07-10 14:41:15.731644] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:03.563 [2024-07-10 14:41:15.731649] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:03.563 [2024-07-10 14:41:15.731654] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbdad00): datao=0, datal=4096, cccid=0 00:23:03.563 [2024-07-10 14:41:15.731659] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc21500) on tqpair(0xbdad00): expected_datao=0, payload_size=4096 00:23:03.563 [2024-07-10 14:41:15.731665] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.563 [2024-07-10 14:41:15.731674] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:03.563 [2024-07-10 14:41:15.731679] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:03.563 [2024-07-10 14:41:15.731689] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.563 [2024-07-10 14:41:15.731696] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.563 [2024-07-10 14:41:15.731699] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.563 [2024-07-10 14:41:15.731704] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21500) on tqpair=0xbdad00 00:23:03.563 [2024-07-10 14:41:15.731713] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:03.563 [2024-07-10 14:41:15.731719] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:03.563 [2024-07-10 14:41:15.731724] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:03.563 [2024-07-10 14:41:15.731730] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:03.563 [2024-07-10 14:41:15.731735] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:03.563 [2024-07-10 14:41:15.731741] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:03.563 [2024-07-10 14:41:15.731751] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:03.563 [2024-07-10 14:41:15.731760] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.563 [2024-07-10 14:41:15.731765] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.563 [2024-07-10 14:41:15.731769] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbdad00) 00:23:03.563 [2024-07-10 14:41:15.731778] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:03.563 [2024-07-10 14:41:15.731800] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21500, cid 0, qid 0 00:23:03.563 [2024-07-10 14:41:15.731870] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.563 [2024-07-10 14:41:15.731882] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.563 [2024-07-10 14:41:15.731887] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.563 [2024-07-10 14:41:15.731891] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21500) on tqpair=0xbdad00 00:23:03.563 [2024-07-10 14:41:15.731900] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.563 [2024-07-10 14:41:15.731905] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.563 [2024-07-10 14:41:15.731909] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbdad00) 00:23:03.563 [2024-07-10 14:41:15.731917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.563 [2024-07-10 14:41:15.731924] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.563 [2024-07-10 14:41:15.731928] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.563 [2024-07-10 14:41:15.731933] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xbdad00) 00:23:03.563 [2024-07-10 14:41:15.731939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.563 [2024-07-10 14:41:15.731946] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.563 [2024-07-10 14:41:15.731950] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.563 [2024-07-10 14:41:15.731955] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xbdad00) 00:23:03.563 [2024-07-10 14:41:15.731961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.563 [2024-07-10 14:41:15.731968] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.563 [2024-07-10 14:41:15.731973] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.563 [2024-07-10 14:41:15.731977] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbdad00) 00:23:03.563 [2024-07-10 14:41:15.731983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.563 [2024-07-10 14:41:15.731989] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:03.563 [2024-07-10 14:41:15.732004] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:03.563 [2024-07-10 14:41:15.732012] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.563 [2024-07-10 14:41:15.732017] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbdad00) 00:23:03.563 [2024-07-10 14:41:15.732025] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.563 [2024-07-10 14:41:15.732048] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21500, cid 0, qid 0 00:23:03.563 [2024-07-10 14:41:15.732056] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21680, cid 1, qid 0 00:23:03.563 [2024-07-10 14:41:15.732061] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21800, cid 2, qid 0 00:23:03.563 [2024-07-10 14:41:15.732067] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.563 [2024-07-10 14:41:15.732072] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21b00, cid 4, qid 0 00:23:03.563 [2024-07-10 14:41:15.732190] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.563 [2024-07-10 14:41:15.732198] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.563 [2024-07-10 14:41:15.732202] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.563 [2024-07-10 14:41:15.732206] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21b00) on tqpair=0xbdad00 00:23:03.563 [2024-07-10 14:41:15.732213] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:03.563 [2024-07-10 14:41:15.732223] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:03.563 [2024-07-10 14:41:15.732232] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:03.563 [2024-07-10 14:41:15.732240] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 
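At this point in the trace the initialization state machine is negotiating admin-queue features (AER configuration, keep-alive timer, number of queues) with asynchronous Set/Get Features capsules that complete as the admin queue is polled. A small sketch of the same pattern with the public API, assuming a ctrlr connected as in the sketch above; the callback and flag names are illustrative assumptions.

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void get_num_queues_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        bool *done = arg;

        if (!spdk_nvme_cpl_is_error(cpl)) {
            /* For feature 07h, CDW0 is (NCQA << 16) | NSQA, both 0-based. */
            printf("I/O SQs allocated: %u, I/O CQs allocated: %u\n",
                   (cpl->cdw0 & 0xffff) + 1, (cpl->cdw0 >> 16) + 1);
        }
        *done = true;
    }

    static void query_num_queues(struct spdk_nvme_ctrlr *ctrlr)
    {
        bool done = false;

        /* Issues the same "GET FEATURES NUMBER OF QUEUES ... cdw10:00000007"
         * admin command seen in the trace, then polls admin completions. */
        if (spdk_nvme_ctrlr_cmd_get_feature(ctrlr, SPDK_NVME_FEAT_NUMBER_OF_QUEUES,
                                            0, NULL, 0, get_num_queues_done, &done) != 0) {
            return;
        }
        while (!done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
    }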
00:23:03.563 [2024-07-10 14:41:15.732247] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.563 [2024-07-10 14:41:15.732252] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.563 [2024-07-10 14:41:15.732257] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbdad00) 00:23:03.563 [2024-07-10 14:41:15.732265] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:03.563 [2024-07-10 14:41:15.732307] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21b00, cid 4, qid 0 00:23:03.563 [2024-07-10 14:41:15.732375] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.563 [2024-07-10 14:41:15.732385] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.563 [2024-07-10 14:41:15.732389] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.563 [2024-07-10 14:41:15.732404] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21b00) on tqpair=0xbdad00 00:23:03.563 [2024-07-10 14:41:15.732469] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:03.563 [2024-07-10 14:41:15.732481] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:03.563 [2024-07-10 14:41:15.732491] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.563 [2024-07-10 14:41:15.732496] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbdad00) 00:23:03.563 [2024-07-10 14:41:15.732504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.563 [2024-07-10 14:41:15.732528] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21b00, cid 4, qid 0 00:23:03.563 [2024-07-10 14:41:15.732597] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:03.563 [2024-07-10 14:41:15.732605] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:03.564 [2024-07-10 14:41:15.732609] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:03.564 [2024-07-10 14:41:15.732613] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbdad00): datao=0, datal=4096, cccid=4 00:23:03.564 [2024-07-10 14:41:15.732618] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc21b00) on tqpair(0xbdad00): expected_datao=0, payload_size=4096 00:23:03.564 [2024-07-10 14:41:15.732623] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.564 [2024-07-10 14:41:15.732632] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:03.564 [2024-07-10 14:41:15.732636] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:03.564 [2024-07-10 14:41:15.732645] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.564 [2024-07-10 14:41:15.732652] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.564 [2024-07-10 14:41:15.732656] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.564 [2024-07-10 14:41:15.732661] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21b00) on tqpair=0xbdad00 00:23:03.564 [2024-07-10 14:41:15.732677] 
nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:03.564 [2024-07-10 14:41:15.732688] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:03.564 [2024-07-10 14:41:15.732699] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:03.564 [2024-07-10 14:41:15.732708] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.564 [2024-07-10 14:41:15.732713] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbdad00) 00:23:03.564 [2024-07-10 14:41:15.732721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.564 [2024-07-10 14:41:15.732743] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21b00, cid 4, qid 0 00:23:03.564 [2024-07-10 14:41:15.732837] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:03.564 [2024-07-10 14:41:15.732846] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:03.564 [2024-07-10 14:41:15.732851] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:03.564 [2024-07-10 14:41:15.732855] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbdad00): datao=0, datal=4096, cccid=4 00:23:03.564 [2024-07-10 14:41:15.732860] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc21b00) on tqpair(0xbdad00): expected_datao=0, payload_size=4096 00:23:03.564 [2024-07-10 14:41:15.732865] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.564 [2024-07-10 14:41:15.732873] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:03.564 [2024-07-10 14:41:15.732877] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:03.564 [2024-07-10 14:41:15.732886] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.564 [2024-07-10 14:41:15.732893] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.564 [2024-07-10 14:41:15.732897] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.564 [2024-07-10 14:41:15.732901] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21b00) on tqpair=0xbdad00 00:23:03.564 [2024-07-10 14:41:15.732927] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:03.564 [2024-07-10 14:41:15.732939] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:03.564 [2024-07-10 14:41:15.732949] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.564 [2024-07-10 14:41:15.732953] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbdad00) 00:23:03.564 [2024-07-10 14:41:15.732962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.564 [2024-07-10 14:41:15.732984] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21b00, cid 4, qid 0 00:23:03.564 [2024-07-10 14:41:15.733052] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:03.564 [2024-07-10 
14:41:15.733065] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:03.564 [2024-07-10 14:41:15.733070] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:03.564 [2024-07-10 14:41:15.733075] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbdad00): datao=0, datal=4096, cccid=4 00:23:03.564 [2024-07-10 14:41:15.733080] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc21b00) on tqpair(0xbdad00): expected_datao=0, payload_size=4096 00:23:03.564 [2024-07-10 14:41:15.733085] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.564 [2024-07-10 14:41:15.733092] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:03.564 [2024-07-10 14:41:15.733097] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:03.564 [2024-07-10 14:41:15.733106] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.564 [2024-07-10 14:41:15.733113] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.564 [2024-07-10 14:41:15.733117] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.564 [2024-07-10 14:41:15.733121] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21b00) on tqpair=0xbdad00 00:23:03.564 [2024-07-10 14:41:15.733131] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:03.564 [2024-07-10 14:41:15.733140] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:03.564 [2024-07-10 14:41:15.733152] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:03.564 [2024-07-10 14:41:15.733159] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:03.564 [2024-07-10 14:41:15.733165] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:03.564 [2024-07-10 14:41:15.733171] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:03.564 [2024-07-10 14:41:15.733177] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:03.564 [2024-07-10 14:41:15.733182] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:03.564 [2024-07-10 14:41:15.733189] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:03.564 [2024-07-10 14:41:15.733207] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.564 [2024-07-10 14:41:15.733212] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbdad00) 00:23:03.564 [2024-07-10 14:41:15.733220] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.564 [2024-07-10 14:41:15.733228] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.564 [2024-07-10 14:41:15.733233] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.564 [2024-07-10 
14:41:15.733237] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbdad00) 00:23:03.564 [2024-07-10 14:41:15.733244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.564 [2024-07-10 14:41:15.733271] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21b00, cid 4, qid 0 00:23:03.564 [2024-07-10 14:41:15.733279] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21c80, cid 5, qid 0 00:23:03.564 [2024-07-10 14:41:15.733368] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.564 [2024-07-10 14:41:15.733380] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.564 [2024-07-10 14:41:15.733385] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.564 [2024-07-10 14:41:15.733389] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21b00) on tqpair=0xbdad00 00:23:03.564 [2024-07-10 14:41:15.733397] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.564 [2024-07-10 14:41:15.733404] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.564 [2024-07-10 14:41:15.733408] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.564 [2024-07-10 14:41:15.733412] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21c80) on tqpair=0xbdad00 00:23:03.564 [2024-07-10 14:41:15.733424] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.564 [2024-07-10 14:41:15.733429] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbdad00) 00:23:03.564 [2024-07-10 14:41:15.733437] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.564 [2024-07-10 14:41:15.733459] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21c80, cid 5, qid 0 00:23:03.564 [2024-07-10 14:41:15.733521] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.564 [2024-07-10 14:41:15.733533] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.564 [2024-07-10 14:41:15.733538] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.564 [2024-07-10 14:41:15.733542] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21c80) on tqpair=0xbdad00 00:23:03.564 [2024-07-10 14:41:15.733554] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.564 [2024-07-10 14:41:15.733559] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbdad00) 00:23:03.564 [2024-07-10 14:41:15.733567] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.564 [2024-07-10 14:41:15.733587] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21c80, cid 5, qid 0 00:23:03.564 [2024-07-10 14:41:15.733644] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.564 [2024-07-10 14:41:15.733651] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.564 [2024-07-10 14:41:15.733656] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.564 [2024-07-10 14:41:15.733660] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21c80) on tqpair=0xbdad00 00:23:03.564 [2024-07-10 
14:41:15.733671] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.564 [2024-07-10 14:41:15.733676] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbdad00) 00:23:03.564 [2024-07-10 14:41:15.733684] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.564 [2024-07-10 14:41:15.733703] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21c80, cid 5, qid 0 00:23:03.564 [2024-07-10 14:41:15.733760] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.564 [2024-07-10 14:41:15.733767] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.564 [2024-07-10 14:41:15.733772] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.564 [2024-07-10 14:41:15.733776] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21c80) on tqpair=0xbdad00 00:23:03.564 [2024-07-10 14:41:15.733795] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.564 [2024-07-10 14:41:15.733801] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbdad00) 00:23:03.564 [2024-07-10 14:41:15.733809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.564 [2024-07-10 14:41:15.733818] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.564 [2024-07-10 14:41:15.733822] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbdad00) 00:23:03.565 [2024-07-10 14:41:15.733829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.565 [2024-07-10 14:41:15.733837] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.565 [2024-07-10 14:41:15.733842] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xbdad00) 00:23:03.565 [2024-07-10 14:41:15.733849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.565 [2024-07-10 14:41:15.733860] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.565 [2024-07-10 14:41:15.733865] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xbdad00) 00:23:03.565 [2024-07-10 14:41:15.733872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.565 [2024-07-10 14:41:15.733894] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21c80, cid 5, qid 0 00:23:03.565 [2024-07-10 14:41:15.733902] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21b00, cid 4, qid 0 00:23:03.565 [2024-07-10 14:41:15.733908] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21e00, cid 6, qid 0 00:23:03.565 [2024-07-10 14:41:15.733913] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21f80, cid 7, qid 0 00:23:03.565 [2024-07-10 14:41:15.734061] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:03.565 [2024-07-10 14:41:15.734073] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =7 00:23:03.565 [2024-07-10 14:41:15.734078] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:03.565 [2024-07-10 14:41:15.734082] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbdad00): datao=0, datal=8192, cccid=5 00:23:03.565 [2024-07-10 14:41:15.734088] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc21c80) on tqpair(0xbdad00): expected_datao=0, payload_size=8192 00:23:03.565 [2024-07-10 14:41:15.734093] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.565 [2024-07-10 14:41:15.734111] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:03.565 [2024-07-10 14:41:15.734116] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:03.565 [2024-07-10 14:41:15.734123] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:03.565 [2024-07-10 14:41:15.734129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:03.565 [2024-07-10 14:41:15.734134] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:03.565 [2024-07-10 14:41:15.734138] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbdad00): datao=0, datal=512, cccid=4 00:23:03.565 [2024-07-10 14:41:15.734143] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc21b00) on tqpair(0xbdad00): expected_datao=0, payload_size=512 00:23:03.565 [2024-07-10 14:41:15.734148] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.565 [2024-07-10 14:41:15.734155] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:03.565 [2024-07-10 14:41:15.734159] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:03.565 [2024-07-10 14:41:15.734166] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:03.565 [2024-07-10 14:41:15.734172] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:03.565 [2024-07-10 14:41:15.734176] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:03.565 [2024-07-10 14:41:15.734180] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbdad00): datao=0, datal=512, cccid=6 00:23:03.565 [2024-07-10 14:41:15.734185] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc21e00) on tqpair(0xbdad00): expected_datao=0, payload_size=512 00:23:03.565 [2024-07-10 14:41:15.734190] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.565 [2024-07-10 14:41:15.734197] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:03.565 [2024-07-10 14:41:15.734202] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:03.565 [2024-07-10 14:41:15.734208] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:03.565 [2024-07-10 14:41:15.734214] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:03.565 [2024-07-10 14:41:15.734218] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:03.565 [2024-07-10 14:41:15.734222] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbdad00): datao=0, datal=4096, cccid=7 00:23:03.565 [2024-07-10 14:41:15.734228] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc21f80) on tqpair(0xbdad00): expected_datao=0, payload_size=4096 00:23:03.565 [2024-07-10 14:41:15.734232] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.565 [2024-07-10 14:41:15.734240] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: 
enter 00:23:03.565 [2024-07-10 14:41:15.734244] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:03.565 [2024-07-10 14:41:15.734253] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.565 [2024-07-10 14:41:15.734260] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.565 [2024-07-10 14:41:15.734264] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.565 [2024-07-10 14:41:15.734268] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21c80) on tqpair=0xbdad00 00:23:03.565 [2024-07-10 14:41:15.738307] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.565 [2024-07-10 14:41:15.738329] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.565 [2024-07-10 14:41:15.738334] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.565 [2024-07-10 14:41:15.738339] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21b00) on tqpair=0xbdad00 00:23:03.565 [2024-07-10 14:41:15.738354] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.565 [2024-07-10 14:41:15.738361] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.565 [2024-07-10 14:41:15.738365] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.565 [2024-07-10 14:41:15.738369] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21e00) on tqpair=0xbdad00 00:23:03.565 [2024-07-10 14:41:15.738377] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.565 [2024-07-10 14:41:15.738384] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.565 [2024-07-10 14:41:15.738388] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.565 [2024-07-10 14:41:15.738392] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21f80) on tqpair=0xbdad00 00:23:03.565 ===================================================== 00:23:03.565 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:03.565 ===================================================== 00:23:03.565 Controller Capabilities/Features 00:23:03.565 ================================ 00:23:03.565 Vendor ID: 8086 00:23:03.565 Subsystem Vendor ID: 8086 00:23:03.565 Serial Number: SPDK00000000000001 00:23:03.565 Model Number: SPDK bdev Controller 00:23:03.565 Firmware Version: 24.09 00:23:03.565 Recommended Arb Burst: 6 00:23:03.565 IEEE OUI Identifier: e4 d2 5c 00:23:03.565 Multi-path I/O 00:23:03.565 May have multiple subsystem ports: Yes 00:23:03.565 May have multiple controllers: Yes 00:23:03.565 Associated with SR-IOV VF: No 00:23:03.565 Max Data Transfer Size: 131072 00:23:03.565 Max Number of Namespaces: 32 00:23:03.565 Max Number of I/O Queues: 127 00:23:03.565 NVMe Specification Version (VS): 1.3 00:23:03.565 NVMe Specification Version (Identify): 1.3 00:23:03.565 Maximum Queue Entries: 128 00:23:03.565 Contiguous Queues Required: Yes 00:23:03.565 Arbitration Mechanisms Supported 00:23:03.565 Weighted Round Robin: Not Supported 00:23:03.565 Vendor Specific: Not Supported 00:23:03.565 Reset Timeout: 15000 ms 00:23:03.565 Doorbell Stride: 4 bytes 00:23:03.565 NVM Subsystem Reset: Not Supported 00:23:03.565 Command Sets Supported 00:23:03.565 NVM Command Set: Supported 00:23:03.565 Boot Partition: Not Supported 00:23:03.565 Memory Page Size Minimum: 4096 bytes 00:23:03.565 Memory Page Size Maximum: 4096 bytes 00:23:03.565 Persistent Memory Region: Not 
Supported 00:23:03.565 Optional Asynchronous Events Supported 00:23:03.565 Namespace Attribute Notices: Supported 00:23:03.565 Firmware Activation Notices: Not Supported 00:23:03.565 ANA Change Notices: Not Supported 00:23:03.565 PLE Aggregate Log Change Notices: Not Supported 00:23:03.565 LBA Status Info Alert Notices: Not Supported 00:23:03.565 EGE Aggregate Log Change Notices: Not Supported 00:23:03.565 Normal NVM Subsystem Shutdown event: Not Supported 00:23:03.565 Zone Descriptor Change Notices: Not Supported 00:23:03.565 Discovery Log Change Notices: Not Supported 00:23:03.565 Controller Attributes 00:23:03.565 128-bit Host Identifier: Supported 00:23:03.565 Non-Operational Permissive Mode: Not Supported 00:23:03.565 NVM Sets: Not Supported 00:23:03.565 Read Recovery Levels: Not Supported 00:23:03.565 Endurance Groups: Not Supported 00:23:03.565 Predictable Latency Mode: Not Supported 00:23:03.565 Traffic Based Keep ALive: Not Supported 00:23:03.565 Namespace Granularity: Not Supported 00:23:03.565 SQ Associations: Not Supported 00:23:03.565 UUID List: Not Supported 00:23:03.565 Multi-Domain Subsystem: Not Supported 00:23:03.565 Fixed Capacity Management: Not Supported 00:23:03.565 Variable Capacity Management: Not Supported 00:23:03.565 Delete Endurance Group: Not Supported 00:23:03.565 Delete NVM Set: Not Supported 00:23:03.565 Extended LBA Formats Supported: Not Supported 00:23:03.565 Flexible Data Placement Supported: Not Supported 00:23:03.565 00:23:03.565 Controller Memory Buffer Support 00:23:03.565 ================================ 00:23:03.565 Supported: No 00:23:03.565 00:23:03.565 Persistent Memory Region Support 00:23:03.565 ================================ 00:23:03.565 Supported: No 00:23:03.565 00:23:03.565 Admin Command Set Attributes 00:23:03.565 ============================ 00:23:03.565 Security Send/Receive: Not Supported 00:23:03.565 Format NVM: Not Supported 00:23:03.565 Firmware Activate/Download: Not Supported 00:23:03.565 Namespace Management: Not Supported 00:23:03.565 Device Self-Test: Not Supported 00:23:03.565 Directives: Not Supported 00:23:03.565 NVMe-MI: Not Supported 00:23:03.565 Virtualization Management: Not Supported 00:23:03.565 Doorbell Buffer Config: Not Supported 00:23:03.565 Get LBA Status Capability: Not Supported 00:23:03.565 Command & Feature Lockdown Capability: Not Supported 00:23:03.565 Abort Command Limit: 4 00:23:03.565 Async Event Request Limit: 4 00:23:03.565 Number of Firmware Slots: N/A 00:23:03.565 Firmware Slot 1 Read-Only: N/A 00:23:03.565 Firmware Activation Without Reset: N/A 00:23:03.565 Multiple Update Detection Support: N/A 00:23:03.565 Firmware Update Granularity: No Information Provided 00:23:03.565 Per-Namespace SMART Log: No 00:23:03.565 Asymmetric Namespace Access Log Page: Not Supported 00:23:03.565 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:03.565 Command Effects Log Page: Supported 00:23:03.565 Get Log Page Extended Data: Supported 00:23:03.565 Telemetry Log Pages: Not Supported 00:23:03.565 Persistent Event Log Pages: Not Supported 00:23:03.565 Supported Log Pages Log Page: May Support 00:23:03.565 Commands Supported & Effects Log Page: Not Supported 00:23:03.565 Feature Identifiers & Effects Log Page:May Support 00:23:03.565 NVMe-MI Commands & Effects Log Page: May Support 00:23:03.565 Data Area 4 for Telemetry Log: Not Supported 00:23:03.565 Error Log Page Entries Supported: 128 00:23:03.565 Keep Alive: Supported 00:23:03.565 Keep Alive Granularity: 10000 ms 00:23:03.565 00:23:03.565 NVM Command Set Attributes 
00:23:03.565 ========================== 00:23:03.565 Submission Queue Entry Size 00:23:03.565 Max: 64 00:23:03.565 Min: 64 00:23:03.565 Completion Queue Entry Size 00:23:03.565 Max: 16 00:23:03.565 Min: 16 00:23:03.565 Number of Namespaces: 32 00:23:03.565 Compare Command: Supported 00:23:03.565 Write Uncorrectable Command: Not Supported 00:23:03.565 Dataset Management Command: Supported 00:23:03.565 Write Zeroes Command: Supported 00:23:03.565 Set Features Save Field: Not Supported 00:23:03.565 Reservations: Supported 00:23:03.565 Timestamp: Not Supported 00:23:03.566 Copy: Supported 00:23:03.566 Volatile Write Cache: Present 00:23:03.566 Atomic Write Unit (Normal): 1 00:23:03.566 Atomic Write Unit (PFail): 1 00:23:03.566 Atomic Compare & Write Unit: 1 00:23:03.566 Fused Compare & Write: Supported 00:23:03.566 Scatter-Gather List 00:23:03.566 SGL Command Set: Supported 00:23:03.566 SGL Keyed: Supported 00:23:03.566 SGL Bit Bucket Descriptor: Not Supported 00:23:03.566 SGL Metadata Pointer: Not Supported 00:23:03.566 Oversized SGL: Not Supported 00:23:03.566 SGL Metadata Address: Not Supported 00:23:03.566 SGL Offset: Supported 00:23:03.566 Transport SGL Data Block: Not Supported 00:23:03.566 Replay Protected Memory Block: Not Supported 00:23:03.566 00:23:03.566 Firmware Slot Information 00:23:03.566 ========================= 00:23:03.566 Active slot: 1 00:23:03.566 Slot 1 Firmware Revision: 24.09 00:23:03.566 00:23:03.566 00:23:03.566 Commands Supported and Effects 00:23:03.566 ============================== 00:23:03.566 Admin Commands 00:23:03.566 -------------- 00:23:03.566 Get Log Page (02h): Supported 00:23:03.566 Identify (06h): Supported 00:23:03.566 Abort (08h): Supported 00:23:03.566 Set Features (09h): Supported 00:23:03.566 Get Features (0Ah): Supported 00:23:03.566 Asynchronous Event Request (0Ch): Supported 00:23:03.566 Keep Alive (18h): Supported 00:23:03.566 I/O Commands 00:23:03.566 ------------ 00:23:03.566 Flush (00h): Supported LBA-Change 00:23:03.566 Write (01h): Supported LBA-Change 00:23:03.566 Read (02h): Supported 00:23:03.566 Compare (05h): Supported 00:23:03.566 Write Zeroes (08h): Supported LBA-Change 00:23:03.566 Dataset Management (09h): Supported LBA-Change 00:23:03.566 Copy (19h): Supported LBA-Change 00:23:03.566 00:23:03.566 Error Log 00:23:03.566 ========= 00:23:03.566 00:23:03.566 Arbitration 00:23:03.566 =========== 00:23:03.566 Arbitration Burst: 1 00:23:03.566 00:23:03.566 Power Management 00:23:03.566 ================ 00:23:03.566 Number of Power States: 1 00:23:03.566 Current Power State: Power State #0 00:23:03.566 Power State #0: 00:23:03.566 Max Power: 0.00 W 00:23:03.566 Non-Operational State: Operational 00:23:03.566 Entry Latency: Not Reported 00:23:03.566 Exit Latency: Not Reported 00:23:03.566 Relative Read Throughput: 0 00:23:03.566 Relative Read Latency: 0 00:23:03.566 Relative Write Throughput: 0 00:23:03.566 Relative Write Latency: 0 00:23:03.566 Idle Power: Not Reported 00:23:03.566 Active Power: Not Reported 00:23:03.566 Non-Operational Permissive Mode: Not Supported 00:23:03.566 00:23:03.566 Health Information 00:23:03.566 ================== 00:23:03.566 Critical Warnings: 00:23:03.566 Available Spare Space: OK 00:23:03.566 Temperature: OK 00:23:03.566 Device Reliability: OK 00:23:03.566 Read Only: No 00:23:03.566 Volatile Memory Backup: OK 00:23:03.566 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:03.566 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:03.566 Available Spare: 0% 00:23:03.566 Available Spare Threshold: 0% 
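The capability and health fields printed above come from the Identify Controller data structure and the SMART / Health Information log page that the tool fetches over the admin queue. A hedged sketch of retrieving that log page directly, again assuming a connected ctrlr as in the earlier sketch; the helper name and buffer sizing are illustrative assumptions.

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static void health_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        *(bool *)arg = true;
    }

    static void print_health(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_health_information_page *hp;
        bool done = false;

        /* Allocate the payload from SPDK's DMA-safe allocator to be safe
         * across transports. */
        hp = spdk_dma_zmalloc(sizeof(*hp), 4096, NULL);
        if (hp == NULL) {
            return;
        }

        /* GET LOG PAGE 02h (SMART / Health Information) for the global NSID,
         * matching the "GET LOG PAGE (02) ... nsid:ffffffff cdw10:007f0002"
         * command in the trace above. */
        if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_HEALTH_INFORMATION,
                                             SPDK_NVME_GLOBAL_NS_TAG, hp, sizeof(*hp),
                                             0, health_done, &done) == 0) {
            while (!done) {
                spdk_nvme_ctrlr_process_admin_completions(ctrlr);
            }
            printf("Percentage used: %u%%, available spare: %u%%\n",
                   hp->percentage_used, hp->available_spare);
        }
        spdk_dma_free(hp);
    }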
00:23:03.566 Life Percentage Used:[2024-07-10 14:41:15.738510] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.566 [2024-07-10 14:41:15.738518] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xbdad00) 00:23:03.566 [2024-07-10 14:41:15.738529] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.566 [2024-07-10 14:41:15.738561] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21f80, cid 7, qid 0 00:23:03.566 [2024-07-10 14:41:15.738656] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.566 [2024-07-10 14:41:15.738670] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.566 [2024-07-10 14:41:15.738674] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.566 [2024-07-10 14:41:15.738679] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21f80) on tqpair=0xbdad00 00:23:03.566 [2024-07-10 14:41:15.738735] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:03.566 [2024-07-10 14:41:15.738752] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21500) on tqpair=0xbdad00 00:23:03.566 [2024-07-10 14:41:15.738761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.566 [2024-07-10 14:41:15.738768] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21680) on tqpair=0xbdad00 00:23:03.566 [2024-07-10 14:41:15.738773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.566 [2024-07-10 14:41:15.738779] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21800) on tqpair=0xbdad00 00:23:03.566 [2024-07-10 14:41:15.738785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.566 [2024-07-10 14:41:15.738791] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.567 [2024-07-10 14:41:15.738796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.567 [2024-07-10 14:41:15.738807] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.738812] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.738817] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbdad00) 00:23:03.567 [2024-07-10 14:41:15.738826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.567 [2024-07-10 14:41:15.738851] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.567 [2024-07-10 14:41:15.738912] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.567 [2024-07-10 14:41:15.738922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.567 [2024-07-10 14:41:15.738927] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.738932] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.567 
[2024-07-10 14:41:15.738941] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.738946] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.738950] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbdad00) 00:23:03.567 [2024-07-10 14:41:15.738959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.567 [2024-07-10 14:41:15.738983] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.567 [2024-07-10 14:41:15.739063] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.567 [2024-07-10 14:41:15.739075] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.567 [2024-07-10 14:41:15.739080] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.739085] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.567 [2024-07-10 14:41:15.739090] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:03.567 [2024-07-10 14:41:15.739096] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:03.567 [2024-07-10 14:41:15.739108] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.739113] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.739117] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbdad00) 00:23:03.567 [2024-07-10 14:41:15.739125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.567 [2024-07-10 14:41:15.739145] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.567 [2024-07-10 14:41:15.739212] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.567 [2024-07-10 14:41:15.739223] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.567 [2024-07-10 14:41:15.739228] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.739233] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.567 [2024-07-10 14:41:15.739245] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.739250] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.739254] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbdad00) 00:23:03.567 [2024-07-10 14:41:15.739263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.567 [2024-07-10 14:41:15.739295] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.567 [2024-07-10 14:41:15.739352] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.567 [2024-07-10 14:41:15.739364] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.567 [2024-07-10 14:41:15.739368] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.739373] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.567 [2024-07-10 14:41:15.739385] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.739390] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.739395] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbdad00) 00:23:03.567 [2024-07-10 14:41:15.739403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.567 [2024-07-10 14:41:15.739425] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.567 [2024-07-10 14:41:15.739479] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.567 [2024-07-10 14:41:15.739487] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.567 [2024-07-10 14:41:15.739491] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.739495] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.567 [2024-07-10 14:41:15.739507] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.739512] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.739516] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbdad00) 00:23:03.567 [2024-07-10 14:41:15.739524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.567 [2024-07-10 14:41:15.739543] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.567 [2024-07-10 14:41:15.739597] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.567 [2024-07-10 14:41:15.739609] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.567 [2024-07-10 14:41:15.739614] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.739619] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.567 [2024-07-10 14:41:15.739630] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.739636] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.739640] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbdad00) 00:23:03.567 [2024-07-10 14:41:15.739648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.567 [2024-07-10 14:41:15.739668] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.567 [2024-07-10 14:41:15.739725] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.567 [2024-07-10 14:41:15.739733] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.567 [2024-07-10 14:41:15.739737] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.739741] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.567 [2024-07-10 14:41:15.739753] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.567 
[2024-07-10 14:41:15.739758] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.739762] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbdad00) 00:23:03.567 [2024-07-10 14:41:15.739770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.567 [2024-07-10 14:41:15.739789] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.567 [2024-07-10 14:41:15.739842] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.567 [2024-07-10 14:41:15.739850] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.567 [2024-07-10 14:41:15.739854] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.739859] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.567 [2024-07-10 14:41:15.739870] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.739875] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.739879] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbdad00) 00:23:03.567 [2024-07-10 14:41:15.739887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.567 [2024-07-10 14:41:15.739906] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.567 [2024-07-10 14:41:15.739966] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.567 [2024-07-10 14:41:15.739973] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.567 [2024-07-10 14:41:15.739978] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.739982] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.567 [2024-07-10 14:41:15.739993] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.739998] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.740003] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbdad00) 00:23:03.567 [2024-07-10 14:41:15.740010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.567 [2024-07-10 14:41:15.740030] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.567 [2024-07-10 14:41:15.740089] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.567 [2024-07-10 14:41:15.740097] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.567 [2024-07-10 14:41:15.740102] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.740107] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.567 [2024-07-10 14:41:15.740119] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.740124] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.740128] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0xbdad00) 00:23:03.567 [2024-07-10 14:41:15.740136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.567 [2024-07-10 14:41:15.740155] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.567 [2024-07-10 14:41:15.740215] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.567 [2024-07-10 14:41:15.740222] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.567 [2024-07-10 14:41:15.740227] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.740231] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.567 [2024-07-10 14:41:15.740242] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.740248] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.567 [2024-07-10 14:41:15.740252] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbdad00) 00:23:03.567 [2024-07-10 14:41:15.740260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.568 [2024-07-10 14:41:15.740278] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.568 [2024-07-10 14:41:15.740356] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.568 [2024-07-10 14:41:15.740364] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.568 [2024-07-10 14:41:15.740368] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.740372] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.568 [2024-07-10 14:41:15.740384] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.740389] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.740393] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbdad00) 00:23:03.568 [2024-07-10 14:41:15.740401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.568 [2024-07-10 14:41:15.740423] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.568 [2024-07-10 14:41:15.740482] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.568 [2024-07-10 14:41:15.740489] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.568 [2024-07-10 14:41:15.740493] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.740498] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.568 [2024-07-10 14:41:15.740509] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.740514] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.740519] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbdad00) 00:23:03.568 [2024-07-10 14:41:15.740527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.568 [2024-07-10 
14:41:15.740546] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.568 [2024-07-10 14:41:15.740600] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.568 [2024-07-10 14:41:15.740608] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.568 [2024-07-10 14:41:15.740612] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.740618] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.568 [2024-07-10 14:41:15.740629] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.740634] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.740639] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbdad00) 00:23:03.568 [2024-07-10 14:41:15.740647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.568 [2024-07-10 14:41:15.740666] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.568 [2024-07-10 14:41:15.740721] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.568 [2024-07-10 14:41:15.740728] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.568 [2024-07-10 14:41:15.740732] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.740737] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.568 [2024-07-10 14:41:15.740748] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.740753] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.740758] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbdad00) 00:23:03.568 [2024-07-10 14:41:15.740766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.568 [2024-07-10 14:41:15.740784] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.568 [2024-07-10 14:41:15.740853] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.568 [2024-07-10 14:41:15.740862] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.568 [2024-07-10 14:41:15.740866] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.740871] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.568 [2024-07-10 14:41:15.740882] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.740887] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.740892] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbdad00) 00:23:03.568 [2024-07-10 14:41:15.740900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.568 [2024-07-10 14:41:15.740920] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.568 [2024-07-10 14:41:15.740979] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:23:03.568 [2024-07-10 14:41:15.740986] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.568 [2024-07-10 14:41:15.740990] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.740995] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.568 [2024-07-10 14:41:15.741006] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.741011] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.741015] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbdad00) 00:23:03.568 [2024-07-10 14:41:15.741023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.568 [2024-07-10 14:41:15.741043] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.568 [2024-07-10 14:41:15.741102] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.568 [2024-07-10 14:41:15.741110] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.568 [2024-07-10 14:41:15.741114] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.741119] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.568 [2024-07-10 14:41:15.741132] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.741137] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.741141] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbdad00) 00:23:03.568 [2024-07-10 14:41:15.741149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.568 [2024-07-10 14:41:15.741168] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.568 [2024-07-10 14:41:15.741225] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.568 [2024-07-10 14:41:15.741232] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.568 [2024-07-10 14:41:15.741236] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.741241] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.568 [2024-07-10 14:41:15.741252] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.741258] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.741262] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbdad00) 00:23:03.568 [2024-07-10 14:41:15.741270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.568 [2024-07-10 14:41:15.741300] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.568 [2024-07-10 14:41:15.741361] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.568 [2024-07-10 14:41:15.741368] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.568 [2024-07-10 14:41:15.741372] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:23:03.568 [2024-07-10 14:41:15.741377] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.568 [2024-07-10 14:41:15.741388] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.741394] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.741398] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbdad00) 00:23:03.568 [2024-07-10 14:41:15.741406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.568 [2024-07-10 14:41:15.741426] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.568 [2024-07-10 14:41:15.741482] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.568 [2024-07-10 14:41:15.741489] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.568 [2024-07-10 14:41:15.741493] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.741498] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.568 [2024-07-10 14:41:15.741509] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.741514] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.741519] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbdad00) 00:23:03.568 [2024-07-10 14:41:15.741527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.568 [2024-07-10 14:41:15.741546] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.568 [2024-07-10 14:41:15.741601] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.568 [2024-07-10 14:41:15.741608] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.568 [2024-07-10 14:41:15.741612] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.741617] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.568 [2024-07-10 14:41:15.741629] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.741635] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.741639] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbdad00) 00:23:03.568 [2024-07-10 14:41:15.741647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.568 [2024-07-10 14:41:15.741666] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.568 [2024-07-10 14:41:15.741729] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.568 [2024-07-10 14:41:15.741737] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.568 [2024-07-10 14:41:15.741741] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.741745] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.568 [2024-07-10 14:41:15.741757] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.741762] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.741766] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbdad00) 00:23:03.568 [2024-07-10 14:41:15.741774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.568 [2024-07-10 14:41:15.741799] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.568 [2024-07-10 14:41:15.741853] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.568 [2024-07-10 14:41:15.741861] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.568 [2024-07-10 14:41:15.741865] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.741869] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.568 [2024-07-10 14:41:15.741880] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.741886] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.741890] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbdad00) 00:23:03.568 [2024-07-10 14:41:15.741898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.568 [2024-07-10 14:41:15.741917] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.568 [2024-07-10 14:41:15.741971] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.568 [2024-07-10 14:41:15.741978] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.568 [2024-07-10 14:41:15.741982] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.741987] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.568 [2024-07-10 14:41:15.741998] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.742003] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.742007] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbdad00) 00:23:03.568 [2024-07-10 14:41:15.742015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.568 [2024-07-10 14:41:15.742034] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.568 [2024-07-10 14:41:15.742092] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.568 [2024-07-10 14:41:15.742100] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.568 [2024-07-10 14:41:15.742104] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.742108] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.568 [2024-07-10 14:41:15.742121] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.742126] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.742131] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbdad00) 00:23:03.568 [2024-07-10 14:41:15.742139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.568 [2024-07-10 14:41:15.742158] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.568 [2024-07-10 14:41:15.742214] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.568 [2024-07-10 14:41:15.742222] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.568 [2024-07-10 14:41:15.742226] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.742230] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.568 [2024-07-10 14:41:15.742241] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.742247] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.742251] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbdad00) 00:23:03.568 [2024-07-10 14:41:15.742259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.568 [2024-07-10 14:41:15.742278] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.568 [2024-07-10 14:41:15.746314] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.568 [2024-07-10 14:41:15.746325] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.568 [2024-07-10 14:41:15.746330] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.746335] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.568 [2024-07-10 14:41:15.746352] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.746358] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.746362] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbdad00) 00:23:03.568 [2024-07-10 14:41:15.746372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.568 [2024-07-10 14:41:15.746400] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc21980, cid 3, qid 0 00:23:03.568 [2024-07-10 14:41:15.746464] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:03.568 [2024-07-10 14:41:15.746472] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:03.568 [2024-07-10 14:41:15.746476] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:03.568 [2024-07-10 14:41:15.746481] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc21980) on tqpair=0xbdad00 00:23:03.568 [2024-07-10 14:41:15.746490] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:23:03.568 0% 00:23:03.568 Data Units Read: 0 00:23:03.568 Data Units Written: 0 00:23:03.568 Host Read Commands: 0 00:23:03.568 Host Write Commands: 0 00:23:03.568 Controller Busy Time: 0 minutes 00:23:03.568 Power Cycles: 0 00:23:03.568 Power On Hours: 0 hours 00:23:03.568 Unsafe Shutdowns: 0 00:23:03.568 
Unrecoverable Media Errors: 0 00:23:03.568 Lifetime Error Log Entries: 0 00:23:03.568 Warning Temperature Time: 0 minutes 00:23:03.568 Critical Temperature Time: 0 minutes 00:23:03.568 00:23:03.568 Number of Queues 00:23:03.568 ================ 00:23:03.568 Number of I/O Submission Queues: 127 00:23:03.568 Number of I/O Completion Queues: 127 00:23:03.568 00:23:03.569 Active Namespaces 00:23:03.569 ================= 00:23:03.569 Namespace ID:1 00:23:03.569 Error Recovery Timeout: Unlimited 00:23:03.569 Command Set Identifier: NVM (00h) 00:23:03.569 Deallocate: Supported 00:23:03.569 Deallocated/Unwritten Error: Not Supported 00:23:03.569 Deallocated Read Value: Unknown 00:23:03.569 Deallocate in Write Zeroes: Not Supported 00:23:03.569 Deallocated Guard Field: 0xFFFF 00:23:03.569 Flush: Supported 00:23:03.569 Reservation: Supported 00:23:03.569 Namespace Sharing Capabilities: Multiple Controllers 00:23:03.569 Size (in LBAs): 131072 (0GiB) 00:23:03.569 Capacity (in LBAs): 131072 (0GiB) 00:23:03.569 Utilization (in LBAs): 131072 (0GiB) 00:23:03.569 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:03.569 EUI64: ABCDEF0123456789 00:23:03.569 UUID: 03e75895-f8b7-438e-8c7f-ad5a045e6be4 00:23:03.569 Thin Provisioning: Not Supported 00:23:03.569 Per-NS Atomic Units: Yes 00:23:03.569 Atomic Boundary Size (Normal): 0 00:23:03.569 Atomic Boundary Size (PFail): 0 00:23:03.569 Atomic Boundary Offset: 0 00:23:03.569 Maximum Single Source Range Length: 65535 00:23:03.569 Maximum Copy Length: 65535 00:23:03.569 Maximum Source Range Count: 1 00:23:03.569 NGUID/EUI64 Never Reused: No 00:23:03.569 Namespace Write Protected: No 00:23:03.569 Number of LBA Formats: 1 00:23:03.569 Current LBA Format: LBA Format #00 00:23:03.569 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:03.569 00:23:03.569 14:41:15 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:03.569 14:41:15 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:03.569 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.569 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:03.569 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.569 14:41:15 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:03.569 14:41:15 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:03.569 14:41:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:03.569 14:41:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:23:03.569 14:41:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:03.569 14:41:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:23:03.569 14:41:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:03.569 14:41:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:03.569 rmmod nvme_tcp 00:23:03.569 rmmod nvme_fabrics 00:23:03.826 rmmod nvme_keyring 00:23:03.827 14:41:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:03.827 14:41:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:23:03.827 14:41:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:23:03.827 14:41:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 105316 ']' 00:23:03.827 14:41:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 105316 00:23:03.827 14:41:15 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 105316 ']' 00:23:03.827 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 105316 00:23:03.827 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:23:03.827 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:03.827 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 105316 00:23:03.827 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:03.827 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:03.827 killing process with pid 105316 00:23:03.827 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 105316' 00:23:03.827 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 105316 00:23:03.827 14:41:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 105316 00:23:03.827 14:41:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:03.827 14:41:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:03.827 14:41:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:03.827 14:41:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:03.827 14:41:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:03.827 14:41:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.827 14:41:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:03.827 14:41:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:03.827 14:41:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:03.827 00:23:03.827 real 0m1.806s 00:23:03.827 user 0m4.115s 00:23:03.827 sys 0m0.570s 00:23:03.827 14:41:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:03.827 14:41:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:03.827 ************************************ 00:23:03.827 END TEST nvmf_identify 00:23:03.827 ************************************ 00:23:04.085 14:41:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:04.085 14:41:16 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:04.085 14:41:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:04.085 14:41:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:04.085 14:41:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:04.085 ************************************ 00:23:04.085 START TEST nvmf_perf 00:23:04.085 ************************************ 00:23:04.085 14:41:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:04.085 * Looking for test storage... 
00:23:04.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:04.085 14:41:16 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:04.085 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:04.085 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:04.085 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:04.085 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:04.085 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:04.085 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:04.085 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:04.085 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:04.085 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:04.085 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:04.085 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:04.085 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:23:04.085 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:23:04.085 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:04.085 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:04.085 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:04.085 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:04.085 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:04.085 14:41:16 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:04.085 14:41:16 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:04.085 14:41:16 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:04.085 14:41:16 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.085 14:41:16 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:04.086 Cannot find device "nvmf_tgt_br" 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:04.086 Cannot find device "nvmf_tgt_br2" 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:04.086 Cannot find device "nvmf_tgt_br" 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:04.086 Cannot find device "nvmf_tgt_br2" 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:04.086 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:04.086 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:04.086 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:04.344 
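nvmf_veth_init is building the virtual topology the rest of the run depends on: the nvmf_tgt_ns_spdk namespace for the target, veth pairs for the initiator and target sides, and, on the lines that follow, addresses, the nvmf_br bridge, iptables rules and ping probes. Reduced to a single directly connected initiator/target pair (the harness itself pairs each interface with a *_br end on a bridge and adds a second target interface), the pattern looks roughly like this sketch, a simplification of mine rather than the exact commands the script runs:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_tgt_if      # directly paired in this sketch
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end moves into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ping -c 1 10.0.0.2                                            # same reachability check as below

Running the target behind a namespace gives the single test VM two independent network stacks, so the NVMe/TCP traffic crosses a real link instead of looping back on one interface.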
14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:04.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:04.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:23:04.344 00:23:04.344 --- 10.0.0.2 ping statistics --- 00:23:04.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.344 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:04.344 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:04.344 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:23:04.344 00:23:04.344 --- 10.0.0.3 ping statistics --- 00:23:04.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.344 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:04.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:04.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:23:04.344 00:23:04.344 --- 10.0.0.1 ping statistics --- 00:23:04.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.344 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=105519 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 105519 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:04.344 14:41:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 105519 ']' 00:23:04.602 14:41:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:04.602 14:41:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:04.602 14:41:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:04.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:04.602 14:41:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:04.602 14:41:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:04.602 [2024-07-10 14:41:16.693768] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:23:04.602 [2024-07-10 14:41:16.693873] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:04.602 [2024-07-10 14:41:16.819704] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:04.602 [2024-07-10 14:41:16.835484] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:04.602 [2024-07-10 14:41:16.871648] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:04.602 [2024-07-10 14:41:16.871701] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:04.602 [2024-07-10 14:41:16.871713] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:04.602 [2024-07-10 14:41:16.871722] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:04.602 [2024-07-10 14:41:16.871729] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:04.602 [2024-07-10 14:41:16.875322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.602 [2024-07-10 14:41:16.875446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:04.602 [2024-07-10 14:41:16.875506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:04.602 [2024-07-10 14:41:16.875512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.533 14:41:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:05.533 14:41:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:23:05.533 14:41:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:05.533 14:41:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:05.533 14:41:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:05.533 14:41:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.533 14:41:17 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:05.533 14:41:17 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:23:06.097 14:41:18 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:23:06.097 14:41:18 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:06.353 14:41:18 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:23:06.353 14:41:18 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:06.612 14:41:18 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:06.612 14:41:18 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:23:06.612 14:41:18 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:06.612 14:41:18 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:06.612 14:41:18 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:06.870 [2024-07-10 14:41:19.019754] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.870 14:41:19 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:07.128 14:41:19 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:07.128 14:41:19 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:07.385 14:41:19 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:07.385 14:41:19 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:07.644 14:41:19 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:07.902 [2024-07-10 14:41:20.181091] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:08.161 14:41:20 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:08.418 14:41:20 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:23:08.418 14:41:20 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:23:08.418 14:41:20 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:08.418 14:41:20 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:23:09.360 Initializing NVMe Controllers 00:23:09.360 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:23:09.360 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:23:09.360 Initialization complete. Launching workers. 00:23:09.360 ======================================================== 00:23:09.360 Latency(us) 00:23:09.360 Device Information : IOPS MiB/s Average min max 00:23:09.360 PCIE (0000:00:10.0) NSID 1 from core 0: 23838.00 93.12 1342.06 327.37 6425.28 00:23:09.360 ======================================================== 00:23:09.360 Total : 23838.00 93.12 1342.06 327.37 6425.28 00:23:09.360 00:23:09.360 14:41:21 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:10.751 Initializing NVMe Controllers 00:23:10.751 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:10.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:10.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:10.751 Initialization complete. Launching workers. 00:23:10.751 ======================================================== 00:23:10.751 Latency(us) 00:23:10.751 Device Information : IOPS MiB/s Average min max 00:23:10.751 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3334.92 13.03 299.50 118.80 4309.31 00:23:10.751 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.63 0.48 8152.53 5006.57 12013.86 00:23:10.751 ======================================================== 00:23:10.751 Total : 3458.54 13.51 580.20 118.80 12013.86 00:23:10.751 00:23:10.751 14:41:22 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:12.126 Initializing NVMe Controllers 00:23:12.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:12.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:12.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:12.126 Initialization complete. Launching workers. 
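Up to this point perf.sh has assembled the target entirely over rpc.py, as traced earlier in this block: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a Malloc0 namespace and the local Nvme0n1 namespace, plus data and discovery listeners on 10.0.0.2:4420. A condensed recap of those RPC calls (arguments copied from the trace, error handling omitted):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                                        # -> Malloc0
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_perf runs traced around this point exercise that subsystem over the veth network; the latency table directly below belongs to the -q 32 -HI run started just above.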
00:23:12.126 ======================================================== 00:23:12.126 Latency(us) 00:23:12.126 Device Information : IOPS MiB/s Average min max 00:23:12.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8631.12 33.72 3707.33 686.78 7470.85 00:23:12.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2735.09 10.68 11813.02 6407.65 20189.97 00:23:12.126 ======================================================== 00:23:12.126 Total : 11366.20 44.40 5657.83 686.78 20189.97 00:23:12.126 00:23:12.126 14:41:24 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:23:12.126 14:41:24 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:14.660 Initializing NVMe Controllers 00:23:14.660 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:14.660 Controller IO queue size 128, less than required. 00:23:14.660 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:14.660 Controller IO queue size 128, less than required. 00:23:14.660 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:14.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:14.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:14.661 Initialization complete. Launching workers. 00:23:14.661 ======================================================== 00:23:14.661 Latency(us) 00:23:14.661 Device Information : IOPS MiB/s Average min max 00:23:14.661 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1299.48 324.87 105153.09 46367.86 667293.91 00:23:14.661 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 432.33 108.08 311848.59 103450.31 565258.92 00:23:14.661 ======================================================== 00:23:14.661 Total : 1731.80 432.95 156752.40 46367.86 667293.91 00:23:14.661 00:23:14.661 14:41:26 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:14.918 Initializing NVMe Controllers 00:23:14.918 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:14.918 Controller IO queue size 128, less than required. 00:23:14.918 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:14.918 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:14.918 Controller IO queue size 128, less than required. 00:23:14.918 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:14.918 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:23:14.918 WARNING: Some requested NVMe devices were skipped 00:23:14.918 No valid NVMe controllers or AIO or URING devices found 00:23:14.918 14:41:27 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:17.448 Initializing NVMe Controllers 00:23:17.448 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:17.448 Controller IO queue size 128, less than required. 00:23:17.448 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:17.448 Controller IO queue size 128, less than required. 00:23:17.448 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:17.448 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:17.448 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:17.448 Initialization complete. Launching workers. 00:23:17.448 00:23:17.448 ==================== 00:23:17.448 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:17.448 TCP transport: 00:23:17.449 polls: 7833 00:23:17.449 idle_polls: 4204 00:23:17.449 sock_completions: 3629 00:23:17.449 nvme_completions: 5181 00:23:17.449 submitted_requests: 7752 00:23:17.449 queued_requests: 1 00:23:17.449 00:23:17.449 ==================== 00:23:17.449 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:17.449 TCP transport: 00:23:17.449 polls: 7790 00:23:17.449 idle_polls: 4589 00:23:17.449 sock_completions: 3201 00:23:17.449 nvme_completions: 6357 00:23:17.449 submitted_requests: 9610 00:23:17.449 queued_requests: 1 00:23:17.449 ======================================================== 00:23:17.449 Latency(us) 00:23:17.449 Device Information : IOPS MiB/s Average min max 00:23:17.449 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1292.33 323.08 101881.98 67190.17 213068.06 00:23:17.449 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1585.72 396.43 80975.57 33635.57 150913.67 00:23:17.449 ======================================================== 00:23:17.449 Total : 2878.05 719.51 90363.16 33635.57 213068.06 00:23:17.449 00:23:17.449 14:41:29 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:17.449 14:41:29 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:17.706 14:41:29 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:23:17.706 14:41:29 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:23:17.706 14:41:29 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:23:18.272 14:41:30 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=65f27246-e3a6-4fa9-9674-64e7d6606222 00:23:18.272 14:41:30 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 65f27246-e3a6-4fa9-9674-64e7d6606222 00:23:18.272 14:41:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=65f27246-e3a6-4fa9-9674-64e7d6606222 00:23:18.272 14:41:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:23:18.272 14:41:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 
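get_lvs_free_mb, entered just above, derives the usable capacity of an lvstore from the bdev_lvol_get_lvstores output traced below: free_clusters multiplied by cluster_size, converted to MiB. For lvs_0 that is 1278 clusters x 4 MiB = 5112 MiB, which is why lbd_0 is then created with a size of 5112. A worked sketch of the same computation (UUID and values taken from the trace; the helper's exact shell lives in autotest_common.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    uuid=65f27246-e3a6-4fa9-9674-64e7d6606222
    fc=$($rpc bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .free_clusters")   # 1278
    cs=$($rpc bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .cluster_size")    # 4194304
    echo $(( fc * cs / 1024 / 1024 ))                                                        # 5112 MiB

The same helper is reused below for the nested lvstore lvs_n_0, where 1276 free clusters yield 5104 MiB.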
00:23:18.272 14:41:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:23:18.272 14:41:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:18.530 14:41:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:23:18.530 { 00:23:18.530 "base_bdev": "Nvme0n1", 00:23:18.530 "block_size": 4096, 00:23:18.530 "cluster_size": 4194304, 00:23:18.530 "free_clusters": 1278, 00:23:18.530 "name": "lvs_0", 00:23:18.530 "total_data_clusters": 1278, 00:23:18.530 "uuid": "65f27246-e3a6-4fa9-9674-64e7d6606222" 00:23:18.530 } 00:23:18.530 ]' 00:23:18.530 14:41:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="65f27246-e3a6-4fa9-9674-64e7d6606222") .free_clusters' 00:23:18.530 14:41:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1278 00:23:18.530 14:41:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="65f27246-e3a6-4fa9-9674-64e7d6606222") .cluster_size' 00:23:18.530 5112 00:23:18.530 14:41:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:23:18.530 14:41:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5112 00:23:18.530 14:41:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5112 00:23:18.530 14:41:30 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:23:18.530 14:41:30 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 65f27246-e3a6-4fa9-9674-64e7d6606222 lbd_0 5112 00:23:18.788 14:41:30 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=a70823b3-8a9c-43a4-bdc9-c482929c92c3 00:23:18.788 14:41:30 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore a70823b3-8a9c-43a4-bdc9-c482929c92c3 lvs_n_0 00:23:19.353 14:41:31 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=5c891b7b-7a1b-41cd-b955-06ec61984e96 00:23:19.353 14:41:31 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 5c891b7b-7a1b-41cd-b955-06ec61984e96 00:23:19.353 14:41:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=5c891b7b-7a1b-41cd-b955-06ec61984e96 00:23:19.353 14:41:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:23:19.353 14:41:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:23:19.353 14:41:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:23:19.353 14:41:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:19.612 14:41:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:23:19.612 { 00:23:19.612 "base_bdev": "Nvme0n1", 00:23:19.612 "block_size": 4096, 00:23:19.612 "cluster_size": 4194304, 00:23:19.612 "free_clusters": 0, 00:23:19.612 "name": "lvs_0", 00:23:19.612 "total_data_clusters": 1278, 00:23:19.612 "uuid": "65f27246-e3a6-4fa9-9674-64e7d6606222" 00:23:19.612 }, 00:23:19.612 { 00:23:19.612 "base_bdev": "a70823b3-8a9c-43a4-bdc9-c482929c92c3", 00:23:19.612 "block_size": 4096, 00:23:19.612 "cluster_size": 4194304, 00:23:19.612 "free_clusters": 1276, 00:23:19.612 "name": "lvs_n_0", 00:23:19.612 "total_data_clusters": 1276, 00:23:19.612 "uuid": "5c891b7b-7a1b-41cd-b955-06ec61984e96" 00:23:19.612 } 00:23:19.612 ]' 00:23:19.612 14:41:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | 
select(.uuid=="5c891b7b-7a1b-41cd-b955-06ec61984e96") .free_clusters' 00:23:19.612 14:41:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1276 00:23:19.612 14:41:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="5c891b7b-7a1b-41cd-b955-06ec61984e96") .cluster_size' 00:23:19.612 5104 00:23:19.612 14:41:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:23:19.612 14:41:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5104 00:23:19.612 14:41:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5104 00:23:19.612 14:41:31 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:23:19.612 14:41:31 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5c891b7b-7a1b-41cd-b955-06ec61984e96 lbd_nest_0 5104 00:23:19.904 14:41:32 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=5ed648aa-e0f6-4a47-90dc-0e1bdcc3cd67 00:23:19.904 14:41:32 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:20.163 14:41:32 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:23:20.163 14:41:32 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 5ed648aa-e0f6-4a47-90dc-0e1bdcc3cd67 00:23:20.421 14:41:32 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:20.679 14:41:32 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:23:20.679 14:41:32 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:23:20.679 14:41:32 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:20.679 14:41:32 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:20.679 14:41:32 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:20.949 Initializing NVMe Controllers 00:23:20.949 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:20.949 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:23:20.949 WARNING: Some requested NVMe devices were skipped 00:23:20.949 No valid NVMe controllers or AIO or URING devices found 00:23:21.210 14:41:33 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:21.210 14:41:33 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:31.288 Initializing NVMe Controllers 00:23:31.288 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:31.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:31.288 Initialization complete. Launching workers. 
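host/perf.sh@95-99, traced above, sweeps queue depths 1, 32 and 128 against I/O sizes 512 and 131072 bytes with spdk_nvme_perf. The 512-byte passes produce the recurring "invalid ns size 5351931904 / block size 4096 for I/O size 512" warnings and are skipped, because the lvol-backed namespace uses a 4096-byte block and a 512-byte I/O cannot be issued to it; only the 131072-byte passes report latency tables, the first of which follows below. The sweep, condensed from the trace:

    qd_depth=("1" "32" "128")
    io_size=("512" "131072")
    for qd in "${qd_depth[@]}"; do
      for o in "${io_size[@]}"; do
        spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
      done
    done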
00:23:31.288 ======================================================== 00:23:31.288 Latency(us) 00:23:31.288 Device Information : IOPS MiB/s Average min max 00:23:31.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 994.89 124.36 1004.71 342.72 6634.54 00:23:31.288 ======================================================== 00:23:31.288 Total : 994.89 124.36 1004.71 342.72 6634.54 00:23:31.288 00:23:31.288 14:41:43 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:31.288 14:41:43 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:31.288 14:41:43 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:31.546 Initializing NVMe Controllers 00:23:31.546 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:31.546 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:23:31.546 WARNING: Some requested NVMe devices were skipped 00:23:31.546 No valid NVMe controllers or AIO or URING devices found 00:23:31.546 14:41:43 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:31.546 14:41:43 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:43.860 Initializing NVMe Controllers 00:23:43.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:43.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:43.860 Initialization complete. Launching workers. 
00:23:43.860 ======================================================== 00:23:43.860 Latency(us) 00:23:43.860 Device Information : IOPS MiB/s Average min max 00:23:43.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1057.78 132.22 30795.49 7954.42 279878.43 00:23:43.860 ======================================================== 00:23:43.860 Total : 1057.78 132.22 30795.49 7954.42 279878.43 00:23:43.860 00:23:43.860 14:41:54 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:43.860 14:41:54 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:43.860 14:41:54 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:43.860 Initializing NVMe Controllers 00:23:43.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:43.860 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:23:43.860 WARNING: Some requested NVMe devices were skipped 00:23:43.860 No valid NVMe controllers or AIO or URING devices found 00:23:43.860 14:41:54 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:43.860 14:41:54 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:53.830 Initializing NVMe Controllers 00:23:53.830 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:53.830 Controller IO queue size 128, less than required. 00:23:53.830 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:53.830 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:53.830 Initialization complete. Launching workers. 
00:23:53.830 ======================================================== 00:23:53.830 Latency(us) 00:23:53.830 Device Information : IOPS MiB/s Average min max 00:23:53.830 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3432.84 429.11 37305.85 10641.04 272917.89 00:23:53.830 ======================================================== 00:23:53.830 Total : 3432.84 429.11 37305.85 10641.04 272917.89 00:23:53.830 00:23:53.830 14:42:04 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:53.830 14:42:05 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5ed648aa-e0f6-4a47-90dc-0e1bdcc3cd67 00:23:53.830 14:42:05 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:23:53.830 14:42:05 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a70823b3-8a9c-43a4-bdc9-c482929c92c3 00:23:54.089 14:42:06 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:23:54.347 14:42:06 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:54.347 14:42:06 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:54.347 14:42:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:54.347 14:42:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:23:54.347 14:42:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:54.347 14:42:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:23:54.347 14:42:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:54.347 14:42:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:54.347 rmmod nvme_tcp 00:23:54.347 rmmod nvme_fabrics 00:23:54.347 rmmod nvme_keyring 00:23:54.347 14:42:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:54.347 14:42:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:23:54.347 14:42:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:23:54.347 14:42:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 105519 ']' 00:23:54.347 14:42:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 105519 00:23:54.347 14:42:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 105519 ']' 00:23:54.347 14:42:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 105519 00:23:54.347 14:42:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:23:54.347 14:42:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:54.347 14:42:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 105519 00:23:54.606 killing process with pid 105519 00:23:54.606 14:42:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:54.606 14:42:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:54.606 14:42:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 105519' 00:23:54.606 14:42:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 105519 00:23:54.606 14:42:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 105519 00:23:55.983 14:42:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:55.983 14:42:07 nvmf_tcp.nvmf_perf 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:55.983 14:42:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:55.983 14:42:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:55.983 14:42:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:55.983 14:42:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.983 14:42:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:55.983 14:42:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.983 14:42:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:55.983 00:23:55.983 real 0m51.753s 00:23:55.983 user 3m12.164s 00:23:55.983 sys 0m11.183s 00:23:55.983 14:42:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:55.983 14:42:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:55.983 ************************************ 00:23:55.983 END TEST nvmf_perf 00:23:55.983 ************************************ 00:23:55.983 14:42:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:55.983 14:42:07 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:55.983 14:42:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:55.983 14:42:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:55.983 14:42:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:55.983 ************************************ 00:23:55.983 START TEST nvmf_fio_host 00:23:55.983 ************************************ 00:23:55.983 14:42:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:55.983 * Looking for test storage... 
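nvmf_perf ends by tearing down everything it created: nvmftestfini unloads the nvme-tcp and nvme-fabrics modules (the rmmod lines above), kills the target by its recorded pid, removes the nvmf_tgt_ns_spdk namespace via _remove_spdk_ns and flushes the initiator address, after which the suite reports roughly 52 seconds of wall-clock time for the test and moves on to nvmf_fio_host, which rebuilds the same environment from scratch. A condensed, hedged sketch of that teardown (helper bodies live in nvmf/common.sh; the netns delete is an assumed equivalent of _remove_spdk_ns):

    modprobe -v -r nvme-tcp                  # also drops nvme_fabrics / nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"       # killprocess 105519
    ip netns delete nvmf_tgt_ns_spdk         # assumption: what _remove_spdk_ns boils down to
    ip -4 addr flush nvmf_init_if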
00:23:55.983 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:55.983 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:55.984 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:55.984 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:55.984 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:55.984 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:55.984 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:55.984 Cannot find device "nvmf_tgt_br" 00:23:55.984 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:23:55.984 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:55.984 Cannot find device "nvmf_tgt_br2" 00:23:55.984 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:23:55.984 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:55.984 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:55.984 Cannot find device "nvmf_tgt_br" 00:23:55.984 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:23:55.984 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:55.984 Cannot find device "nvmf_tgt_br2" 00:23:55.984 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:23:55.984 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:55.984 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:55.984 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:55.984 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:55.984 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:23:55.984 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:55.984 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:55.984 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:23:55.984 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:55.984 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:23:55.984 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:55.984 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:55.984 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:56.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:56.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:23:56.243 00:23:56.243 --- 10.0.0.2 ping statistics --- 00:23:56.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.243 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:56.243 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:56.243 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:23:56.243 00:23:56.243 --- 10.0.0.3 ping statistics --- 00:23:56.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.243 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:56.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:56.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:23:56.243 00:23:56.243 --- 10.0.0.1 ping statistics --- 00:23:56.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.243 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=106507 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:56.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 106507 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 106507 ']' 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:56.243 14:42:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.243 [2024-07-10 14:42:08.513838] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:23:56.243 [2024-07-10 14:42:08.513934] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.502 [2024-07-10 14:42:08.637425] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
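The fio host test rebuilds the same virtual topology the perf test used: nvmf_veth_init, traced above, keeps the initiator side (nvmf_init_if, 10.0.0.1) in the root namespace, moves the two target interfaces (10.0.0.2 and 10.0.0.3) into nvmf_tgt_ns_spdk, joins the peer ends through the nvmf_br bridge, opens TCP port 4420 with iptables and verifies reachability with pings. Condensed from the trace, showing one of the two identical target pairs (the second uses nvmf_tgt_if2/10.0.0.3; the 'ip link set ... up' calls are omitted here but present above):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # initiator -> target reachability check

The EAL and reactor notices that continue below come from the second nvmf_tgt instance (pid 106507) started for this test.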
00:23:56.502 [2024-07-10 14:42:08.655228] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:56.502 [2024-07-10 14:42:08.697551] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.502 [2024-07-10 14:42:08.697811] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:56.502 [2024-07-10 14:42:08.698019] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.502 [2024-07-10 14:42:08.698272] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.502 [2024-07-10 14:42:08.698433] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:56.502 [2024-07-10 14:42:08.698643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.502 [2024-07-10 14:42:08.698693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:56.502 [2024-07-10 14:42:08.698909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:56.502 [2024-07-10 14:42:08.698916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.760 14:42:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:56.760 14:42:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:23:56.760 14:42:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:57.019 [2024-07-10 14:42:09.075342] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.019 14:42:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:57.019 14:42:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:57.019 14:42:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.019 14:42:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:57.277 Malloc1 00:23:57.277 14:42:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:57.535 14:42:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:57.794 14:42:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:58.052 [2024-07-10 14:42:10.150418] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.052 14:42:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:58.311 14:42:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:23:58.311 14:42:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:58.311 14:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:58.311 14:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:58.311 14:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:58.311 14:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:58.311 14:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:58.311 14:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:58.311 14:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:58.311 14:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:58.311 14:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:58.311 14:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:58.311 14:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:58.311 14:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:58.311 14:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:58.311 14:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:58.311 14:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:58.311 14:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:58.311 14:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:58.311 14:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:58.311 14:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:58.311 14:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:58.311 14:42:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:58.311 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:58.311 fio-3.35 00:23:58.311 Starting 1 thread 00:24:00.858 00:24:00.858 test: (groupid=0, jobs=1): err= 0: pid=106619: Wed Jul 10 14:42:12 2024 00:24:00.858 read: IOPS=8559, BW=33.4MiB/s (35.1MB/s)(67.1MiB/2007msec) 00:24:00.858 slat (usec): min=2, max=302, avg= 2.79, stdev= 3.10 00:24:00.858 clat (usec): min=2564, max=13771, avg=7863.99, stdev=1138.30 00:24:00.858 lat (usec): min=2606, max=13775, avg=7866.78, stdev=1138.43 00:24:00.858 clat percentiles (usec): 00:24:00.858 | 1.00th=[ 5997], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7177], 00:24:00.858 | 30.00th=[ 7308], 40.00th=[ 7439], 50.00th=[ 7570], 60.00th=[ 7767], 00:24:00.858 | 70.00th=[ 7963], 80.00th=[ 8225], 90.00th=[ 9110], 95.00th=[10683], 00:24:00.858 | 99.00th=[11994], 99.50th=[12256], 99.90th=[12911], 99.95th=[13173], 00:24:00.858 | 99.99th=[13566] 00:24:00.858 bw ( KiB/s): min=32592, max=35864, per=99.95%, avg=34222.00, stdev=1498.62, samples=4 00:24:00.858 iops : min= 8148, max= 8966, avg=8555.50, stdev=374.65, samples=4 00:24:00.858 
write: IOPS=8557, BW=33.4MiB/s (35.0MB/s)(67.1MiB/2007msec); 0 zone resets 00:24:00.858 slat (usec): min=2, max=210, avg= 2.86, stdev= 2.21 00:24:00.858 clat (usec): min=1861, max=13147, avg=7036.13, stdev=900.58 00:24:00.858 lat (usec): min=1872, max=13149, avg=7038.99, stdev=900.55 00:24:00.858 clat percentiles (usec): 00:24:00.858 | 1.00th=[ 5145], 5.00th=[ 6128], 10.00th=[ 6325], 20.00th=[ 6521], 00:24:00.858 | 30.00th=[ 6652], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 7046], 00:24:00.858 | 70.00th=[ 7177], 80.00th=[ 7373], 90.00th=[ 7832], 95.00th=[ 8848], 00:24:00.858 | 99.00th=[10552], 99.50th=[11076], 99.90th=[11863], 99.95th=[12518], 00:24:00.858 | 99.99th=[13173] 00:24:00.858 bw ( KiB/s): min=32088, max=35496, per=100.00%, avg=34230.00, stdev=1516.69, samples=4 00:24:00.858 iops : min= 8022, max= 8874, avg=8557.50, stdev=379.17, samples=4 00:24:00.858 lat (msec) : 2=0.01%, 4=0.16%, 10=95.19%, 20=4.64% 00:24:00.858 cpu : usr=65.10%, sys=24.63%, ctx=92, majf=0, minf=7 00:24:00.858 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:00.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:00.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:00.858 issued rwts: total=17179,17174,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:00.858 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:00.858 00:24:00.858 Run status group 0 (all jobs): 00:24:00.858 READ: bw=33.4MiB/s (35.1MB/s), 33.4MiB/s-33.4MiB/s (35.1MB/s-35.1MB/s), io=67.1MiB (70.4MB), run=2007-2007msec 00:24:00.858 WRITE: bw=33.4MiB/s (35.0MB/s), 33.4MiB/s-33.4MiB/s (35.0MB/s-35.0MB/s), io=67.1MiB (70.3MB), run=2007-2007msec 00:24:00.858 14:42:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:00.858 14:42:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:00.858 14:42:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:00.858 14:42:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:00.858 14:42:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:00.858 14:42:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:00.858 14:42:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:00.858 14:42:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:00.858 14:42:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:00.858 14:42:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:00.858 14:42:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:00.859 14:42:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:00.859 14:42:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:00.859 14:42:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:00.859 14:42:12 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:00.859 14:42:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:00.859 14:42:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:00.859 14:42:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:00.859 14:42:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:00.859 14:42:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:00.859 14:42:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:24:00.859 14:42:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:00.859 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:00.859 fio-3.35 00:24:00.859 Starting 1 thread 00:24:03.389 00:24:03.389 test: (groupid=0, jobs=1): err= 0: pid=106668: Wed Jul 10 14:42:15 2024 00:24:03.389 read: IOPS=7933, BW=124MiB/s (130MB/s)(249MiB/2007msec) 00:24:03.389 slat (usec): min=3, max=123, avg= 3.94, stdev= 1.86 00:24:03.389 clat (usec): min=2218, max=18245, avg=9607.66, stdev=2400.07 00:24:03.389 lat (usec): min=2223, max=18249, avg=9611.60, stdev=2400.10 00:24:03.389 clat percentiles (usec): 00:24:03.389 | 1.00th=[ 4948], 5.00th=[ 6063], 10.00th=[ 6652], 20.00th=[ 7570], 00:24:03.389 | 30.00th=[ 8225], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[10028], 00:24:03.389 | 70.00th=[10814], 80.00th=[11469], 90.00th=[12518], 95.00th=[13829], 00:24:03.389 | 99.00th=[16319], 99.50th=[17171], 99.90th=[17957], 99.95th=[17957], 00:24:03.389 | 99.99th=[17957] 00:24:03.389 bw ( KiB/s): min=60672, max=69568, per=51.02%, avg=64760.00, stdev=4455.58, samples=4 00:24:03.389 iops : min= 3792, max= 4348, avg=4047.50, stdev=278.47, samples=4 00:24:03.389 write: IOPS=4537, BW=70.9MiB/s (74.3MB/s)(132MiB/1867msec); 0 zone resets 00:24:03.389 slat (usec): min=37, max=362, avg=39.69, stdev= 7.19 00:24:03.389 clat (usec): min=2835, max=20753, avg=11512.59, stdev=2030.59 00:24:03.389 lat (usec): min=2874, max=20791, avg=11552.29, stdev=2030.62 00:24:03.389 clat percentiles (usec): 00:24:03.389 | 1.00th=[ 7242], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10028], 00:24:03.389 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11338], 60.00th=[11731], 00:24:03.389 | 70.00th=[12256], 80.00th=[12911], 90.00th=[14222], 95.00th=[15139], 00:24:03.389 | 99.00th=[17957], 99.50th=[18744], 99.90th=[20317], 99.95th=[20579], 00:24:03.389 | 99.99th=[20841] 00:24:03.389 bw ( KiB/s): min=62176, max=72960, per=92.68%, avg=67280.00, stdev=5058.60, samples=4 00:24:03.389 iops : min= 3886, max= 4560, avg=4205.00, stdev=316.16, samples=4 00:24:03.389 lat (msec) : 4=0.18%, 10=45.46%, 20=54.26%, 50=0.10% 00:24:03.389 cpu : usr=72.58%, sys=17.75%, ctx=5, majf=0, minf=3 00:24:03.389 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:03.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:03.389 issued rwts: total=15922,8471,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.389 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:24:03.389 00:24:03.389 Run status group 0 (all jobs): 00:24:03.389 READ: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=249MiB (261MB), run=2007-2007msec 00:24:03.389 WRITE: bw=70.9MiB/s (74.3MB/s), 70.9MiB/s-70.9MiB/s (74.3MB/s-74.3MB/s), io=132MiB (139MB), run=1867-1867msec 00:24:03.389 14:42:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:03.389 14:42:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:24:03.389 14:42:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:24:03.389 14:42:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:24:03.389 14:42:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:24:03.389 14:42:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:24:03.389 14:42:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:24:03.389 14:42:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:24:03.389 14:42:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:24:03.389 14:42:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:24:03.389 14:42:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:24:03.389 14:42:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2 00:24:03.954 Nvme0n1 00:24:03.954 14:42:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:24:04.212 14:42:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=82e9d226-ea8c-4479-8248-ade5167e63e4 00:24:04.212 14:42:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 82e9d226-ea8c-4479-8248-ade5167e63e4 00:24:04.212 14:42:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=82e9d226-ea8c-4479-8248-ade5167e63e4 00:24:04.212 14:42:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:24:04.212 14:42:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:24:04.212 14:42:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:24:04.212 14:42:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:04.470 14:42:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:24:04.470 { 00:24:04.470 "base_bdev": "Nvme0n1", 00:24:04.470 "block_size": 4096, 00:24:04.470 "cluster_size": 1073741824, 00:24:04.470 "free_clusters": 4, 00:24:04.470 "name": "lvs_0", 00:24:04.470 "total_data_clusters": 4, 00:24:04.470 "uuid": "82e9d226-ea8c-4479-8248-ade5167e63e4" 00:24:04.470 } 00:24:04.470 ]' 00:24:04.470 14:42:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="82e9d226-ea8c-4479-8248-ade5167e63e4") .free_clusters' 00:24:04.470 14:42:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=4 00:24:04.470 14:42:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="82e9d226-ea8c-4479-8248-ade5167e63e4") 
.cluster_size' 00:24:04.471 4096 00:24:04.471 14:42:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:24:04.471 14:42:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4096 00:24:04.471 14:42:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4096 00:24:04.471 14:42:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:24:04.729 f9e0e573-73ed-4bf6-80e0-547095411f4d 00:24:04.729 14:42:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:24:04.988 14:42:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:24:05.246 14:42:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:05.506 14:42:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:05.506 14:42:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:05.506 14:42:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:05.506 14:42:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:05.506 14:42:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:05.506 14:42:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:05.506 14:42:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:05.506 14:42:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:05.506 14:42:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:05.506 14:42:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:05.506 14:42:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:05.506 14:42:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:05.506 14:42:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:05.506 14:42:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:05.506 14:42:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:05.506 14:42:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:05.506 14:42:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:05.506 14:42:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:05.506 14:42:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:05.506 14:42:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 
00:24:05.506 14:42:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:24:05.506 14:42:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:05.506 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:05.506 fio-3.35 00:24:05.506 Starting 1 thread 00:24:08.035 00:24:08.035 test: (groupid=0, jobs=1): err= 0: pid=106824: Wed Jul 10 14:42:20 2024 00:24:08.035 read: IOPS=6413, BW=25.1MiB/s (26.3MB/s)(50.3MiB/2008msec) 00:24:08.035 slat (usec): min=2, max=401, avg= 2.63, stdev= 5.10 00:24:08.035 clat (usec): min=3995, max=19055, avg=10483.60, stdev=936.96 00:24:08.035 lat (usec): min=4005, max=19058, avg=10486.23, stdev=936.64 00:24:08.035 clat percentiles (usec): 00:24:08.035 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[ 9765], 00:24:08.035 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:24:08.035 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11994], 00:24:08.035 | 99.00th=[12649], 99.50th=[13042], 99.90th=[17957], 99.95th=[18482], 00:24:08.035 | 99.99th=[19006] 00:24:08.035 bw ( KiB/s): min=24552, max=26280, per=99.84%, avg=25612.00, stdev=745.72, samples=4 00:24:08.035 iops : min= 6138, max= 6570, avg=6403.00, stdev=186.43, samples=4 00:24:08.035 write: IOPS=6415, BW=25.1MiB/s (26.3MB/s)(50.3MiB/2008msec); 0 zone resets 00:24:08.035 slat (usec): min=2, max=117, avg= 2.74, stdev= 1.57 00:24:08.035 clat (usec): min=2151, max=17153, avg=9361.20, stdev=819.71 00:24:08.035 lat (usec): min=2163, max=17155, avg=9363.94, stdev=819.48 00:24:08.035 clat percentiles (usec): 00:24:08.035 | 1.00th=[ 7504], 5.00th=[ 8160], 10.00th=[ 8455], 20.00th=[ 8717], 00:24:08.035 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9503], 00:24:08.035 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10290], 95.00th=[10552], 00:24:08.035 | 99.00th=[11076], 99.50th=[11338], 99.90th=[14091], 99.95th=[16057], 00:24:08.035 | 99.99th=[17171] 00:24:08.035 bw ( KiB/s): min=25344, max=25920, per=99.97%, avg=25654.00, stdev=262.69, samples=4 00:24:08.035 iops : min= 6336, max= 6480, avg=6413.50, stdev=65.67, samples=4 00:24:08.035 lat (msec) : 4=0.05%, 10=54.73%, 20=45.22% 00:24:08.035 cpu : usr=69.71%, sys=23.07%, ctx=6, majf=0, minf=7 00:24:08.035 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:08.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.035 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:08.035 issued rwts: total=12878,12882,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:08.035 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:08.035 00:24:08.035 Run status group 0 (all jobs): 00:24:08.035 READ: bw=25.1MiB/s (26.3MB/s), 25.1MiB/s-25.1MiB/s (26.3MB/s-26.3MB/s), io=50.3MiB (52.7MB), run=2008-2008msec 00:24:08.035 WRITE: bw=25.1MiB/s (26.3MB/s), 25.1MiB/s-25.1MiB/s (26.3MB/s-26.3MB/s), io=50.3MiB (52.8MB), run=2008-2008msec 00:24:08.035 14:42:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:08.293 14:42:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none 
lvs_0/lbd_0 lvs_n_0 00:24:08.551 14:42:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=e6eb8cc4-f288-4f69-b54b-24b6c7319b7f 00:24:08.551 14:42:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb e6eb8cc4-f288-4f69-b54b-24b6c7319b7f 00:24:08.551 14:42:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=e6eb8cc4-f288-4f69-b54b-24b6c7319b7f 00:24:08.551 14:42:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:24:08.551 14:42:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:24:08.551 14:42:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:24:08.551 14:42:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:08.809 14:42:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:24:08.809 { 00:24:08.809 "base_bdev": "Nvme0n1", 00:24:08.809 "block_size": 4096, 00:24:08.809 "cluster_size": 1073741824, 00:24:08.809 "free_clusters": 0, 00:24:08.809 "name": "lvs_0", 00:24:08.809 "total_data_clusters": 4, 00:24:08.809 "uuid": "82e9d226-ea8c-4479-8248-ade5167e63e4" 00:24:08.809 }, 00:24:08.809 { 00:24:08.809 "base_bdev": "f9e0e573-73ed-4bf6-80e0-547095411f4d", 00:24:08.809 "block_size": 4096, 00:24:08.809 "cluster_size": 4194304, 00:24:08.809 "free_clusters": 1022, 00:24:08.809 "name": "lvs_n_0", 00:24:08.809 "total_data_clusters": 1022, 00:24:08.809 "uuid": "e6eb8cc4-f288-4f69-b54b-24b6c7319b7f" 00:24:08.809 } 00:24:08.809 ]' 00:24:08.809 14:42:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="e6eb8cc4-f288-4f69-b54b-24b6c7319b7f") .free_clusters' 00:24:08.809 14:42:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1022 00:24:08.809 14:42:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="e6eb8cc4-f288-4f69-b54b-24b6c7319b7f") .cluster_size' 00:24:08.809 4088 00:24:08.809 14:42:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:24:08.809 14:42:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4088 00:24:08.809 14:42:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4088 00:24:08.809 14:42:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:24:09.067 d4c7db49-8aec-471b-a273-3d6364f3c35e 00:24:09.067 14:42:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:24:09.325 14:42:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:24:09.583 14:42:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:09.842 14:42:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:09.842 14:42:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:24:09.842 14:42:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:09.842 14:42:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:09.842 14:42:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:09.842 14:42:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:09.842 14:42:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:09.842 14:42:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:09.842 14:42:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:09.842 14:42:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:09.842 14:42:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:09.842 14:42:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:09.842 14:42:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:09.842 14:42:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:09.842 14:42:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:09.842 14:42:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:09.842 14:42:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:09.842 14:42:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:09.842 14:42:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:09.842 14:42:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:09.842 14:42:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:24:09.842 14:42:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:10.100 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:10.100 fio-3.35 00:24:10.100 Starting 1 thread 00:24:12.631 00:24:12.631 test: (groupid=0, jobs=1): err= 0: pid=106944: Wed Jul 10 14:42:24 2024 00:24:12.631 read: IOPS=5427, BW=21.2MiB/s (22.2MB/s)(42.6MiB/2011msec) 00:24:12.631 slat (usec): min=2, max=274, avg= 2.77, stdev= 3.45 00:24:12.631 clat (usec): min=4340, max=21063, avg=12430.54, stdev=1691.94 00:24:12.631 lat (usec): min=4349, max=21066, avg=12433.31, stdev=1691.78 00:24:12.631 clat percentiles (usec): 00:24:12.631 | 1.00th=[ 9765], 5.00th=[10421], 10.00th=[10683], 20.00th=[11076], 00:24:12.631 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12125], 60.00th=[12387], 00:24:12.631 | 70.00th=[12911], 80.00th=[13566], 90.00th=[14877], 95.00th=[15795], 00:24:12.631 | 99.00th=[17695], 99.50th=[18482], 99.90th=[20055], 99.95th=[20317], 00:24:12.631 | 99.99th=[21103] 00:24:12.631 bw ( KiB/s): min=19496, max=23400, per=99.92%, avg=21692.00, stdev=1644.99, samples=4 00:24:12.631 iops : min= 4874, max= 5850, avg=5423.00, stdev=411.25, samples=4 00:24:12.631 write: IOPS=5405, 
BW=21.1MiB/s (22.1MB/s)(42.5MiB/2011msec); 0 zone resets 00:24:12.631 slat (usec): min=2, max=221, avg= 2.87, stdev= 2.41 00:24:12.631 clat (usec): min=2023, max=20730, avg=11115.83, stdev=1577.04 00:24:12.631 lat (usec): min=2030, max=20733, avg=11118.70, stdev=1576.93 00:24:12.631 clat percentiles (usec): 00:24:12.631 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:24:12.631 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:24:12.631 | 70.00th=[11469], 80.00th=[12125], 90.00th=[13304], 95.00th=[14222], 00:24:12.631 | 99.00th=[15926], 99.50th=[16909], 99.90th=[19268], 99.95th=[20317], 00:24:12.631 | 99.99th=[20579] 00:24:12.631 bw ( KiB/s): min=19008, max=23168, per=100.00%, avg=21622.00, stdev=1915.74, samples=4 00:24:12.631 iops : min= 4752, max= 5792, avg=5405.50, stdev=478.94, samples=4 00:24:12.631 lat (msec) : 4=0.05%, 10=11.67%, 20=88.19%, 50=0.09% 00:24:12.631 cpu : usr=73.28%, sys=20.80%, ctx=8, majf=0, minf=7 00:24:12.631 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:24:12.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:12.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:12.631 issued rwts: total=10914,10870,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:12.631 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:12.631 00:24:12.631 Run status group 0 (all jobs): 00:24:12.631 READ: bw=21.2MiB/s (22.2MB/s), 21.2MiB/s-21.2MiB/s (22.2MB/s-22.2MB/s), io=42.6MiB (44.7MB), run=2011-2011msec 00:24:12.631 WRITE: bw=21.1MiB/s (22.1MB/s), 21.1MiB/s-21.1MiB/s (22.1MB/s-22.1MB/s), io=42.5MiB (44.5MB), run=2011-2011msec 00:24:12.631 14:42:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:12.631 14:42:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:24:12.889 14:42:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:24:13.148 14:42:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:24:13.407 14:42:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:24:13.665 14:42:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:24:13.924 14:42:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:24:14.182 14:42:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:14.182 14:42:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:14.182 14:42:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:14.182 14:42:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:14.182 14:42:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:14.182 14:42:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:14.182 14:42:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:14.182 14:42:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:14.182 14:42:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:14.182 rmmod nvme_tcp 00:24:14.182 rmmod nvme_fabrics 00:24:14.182 rmmod nvme_keyring 
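For anyone reading the trace rather than re-running it, the nvmf_fio_host passes above all follow the same shape: a handful of rpc.py calls export a bdev over NVMe/TCP, then fio is pointed at the subsystem through the SPDK NVMe fio plugin. A minimal sketch of that sequence, reconstructed from the commands traced above (paths are shortened to the SPDK repo root, and a running nvmf_tgt on the default /var/tmp/spdk.sock RPC socket is assumed):

# Target side: export a 64 MiB malloc bdev over NVMe/TCP
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: preload the SPDK fio plugin; the --filename string carries the
# transport/address/subsystem tuple instead of a block-device path.
LD_PRELOAD=build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
  '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The later passes repeat the same flow with the namespace backed by logical volumes instead of a malloc bdev; the get_lvs_free_mb helper traced above is simply free_clusters times cluster_size (4 x 1 GiB clusters = 4096 MiB for lvs_0, and 1022 x 4 MiB clusters = 4088 MiB for the nested lvs_n_0).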
00:24:14.182 14:42:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:14.182 14:42:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:14.182 14:42:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:14.182 14:42:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 106507 ']' 00:24:14.182 14:42:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 106507 00:24:14.182 14:42:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 106507 ']' 00:24:14.182 14:42:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 106507 00:24:14.182 14:42:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:24:14.182 14:42:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:14.182 14:42:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 106507 00:24:14.182 14:42:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:14.182 14:42:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:14.183 killing process with pid 106507 00:24:14.183 14:42:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 106507' 00:24:14.183 14:42:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 106507 00:24:14.183 14:42:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 106507 00:24:14.442 14:42:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:14.442 14:42:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:14.442 14:42:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:14.442 14:42:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:14.442 14:42:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:14.442 14:42:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.442 14:42:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:14.442 14:42:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.442 14:42:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:14.442 ************************************ 00:24:14.442 END TEST nvmf_fio_host 00:24:14.442 ************************************ 00:24:14.442 00:24:14.442 real 0m18.625s 00:24:14.442 user 1m22.503s 00:24:14.442 sys 0m4.295s 00:24:14.442 14:42:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:14.442 14:42:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.442 14:42:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:14.442 14:42:26 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:14.442 14:42:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:14.442 14:42:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:14.442 14:42:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:14.442 ************************************ 00:24:14.442 START TEST nvmf_failover 00:24:14.442 ************************************ 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:14.442 * Looking for test storage... 00:24:14.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:14.442 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:14.443 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:14.443 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:14.443 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:14.443 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:14.443 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:14.443 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:14.443 14:42:26 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:14.443 14:42:26 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:14.443 14:42:26 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:14.443 14:42:26 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:14.443 14:42:26 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:14.443 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:14.443 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:14.443 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:14.443 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:14.443 
14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:14.443 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.443 14:42:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:14.443 14:42:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:14.702 Cannot find device "nvmf_tgt_br" 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:14.702 Cannot find device "nvmf_tgt_br2" 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:14.702 Cannot find device "nvmf_tgt_br" 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:14.702 Cannot find device "nvmf_tgt_br2" 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:14.702 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:14.702 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:14.702 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:14.961 14:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:14.961 14:42:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:14.961 14:42:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:14.961 14:42:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:14.961 14:42:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:14.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:14.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:24:14.961 00:24:14.961 --- 10.0.0.2 ping statistics --- 00:24:14.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.961 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:24:14.961 14:42:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:14.961 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:14.961 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:24:14.961 00:24:14.961 --- 10.0.0.3 ping statistics --- 00:24:14.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.961 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:24:14.961 14:42:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:14.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:14.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:24:14.961 00:24:14.961 --- 10.0.0.1 ping statistics --- 00:24:14.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.961 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:24:14.961 14:42:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:14.961 14:42:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:24:14.961 14:42:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:14.961 14:42:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:14.961 14:42:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:14.961 14:42:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:14.961 14:42:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:14.961 14:42:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:14.961 14:42:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:14.961 14:42:27 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:14.961 14:42:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:14.961 14:42:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:14.961 14:42:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:14.961 14:42:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=107223 00:24:14.961 14:42:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 107223 00:24:14.961 14:42:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 107223 ']' 00:24:14.961 14:42:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.961 14:42:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:14.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.961 14:42:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
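The nvmf_veth_init block above is the virtual network that every 10.0.0.x address in this log lives on: the target runs inside a network namespace, the initiator stays in the root namespace, and a bridge joins the two. A condensed sketch of what the traced commands build (the interface and namespace names are the ones test/nvmf/common.sh uses above; the per-link 'ip link set ... up' calls are omitted for brevity):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side, 10.0.0.2
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target side, 10.0.0.3
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge                              # ties the root-namespace ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings are the sanity check that the bridge passes traffic before the target is started; nvmf_tgt itself is then launched through 'ip netns exec nvmf_tgt_ns_spdk', which is why it listens on 10.0.0.2/10.0.0.3 while rpc.py and the initiator tools connect from the root namespace.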
00:24:14.961 14:42:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:14.961 14:42:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:14.961 14:42:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:14.961 [2024-07-10 14:42:27.122455] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:24:14.961 [2024-07-10 14:42:27.122542] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.961 [2024-07-10 14:42:27.242728] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:15.220 [2024-07-10 14:42:27.262170] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:15.220 [2024-07-10 14:42:27.297842] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.220 [2024-07-10 14:42:27.297915] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:15.220 [2024-07-10 14:42:27.297937] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.220 [2024-07-10 14:42:27.297950] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:15.220 [2024-07-10 14:42:27.297962] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:15.220 [2024-07-10 14:42:27.298085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:15.220 [2024-07-10 14:42:27.298697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:15.220 [2024-07-10 14:42:27.298712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:15.220 14:42:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:15.220 14:42:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:24:15.220 14:42:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:15.220 14:42:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:15.220 14:42:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:15.220 14:42:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.220 14:42:27 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:15.478 [2024-07-10 14:42:27.639733] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.478 14:42:27 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:15.737 Malloc0 00:24:15.737 14:42:27 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:15.994 14:42:28 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:16.251 14:42:28 nvmf_tcp.nvmf_failover -- 
host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:16.508 [2024-07-10 14:42:28.640497] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.508 14:42:28 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:16.766 [2024-07-10 14:42:28.920743] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:16.766 14:42:28 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:17.024 [2024-07-10 14:42:29.161146] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:17.024 14:42:29 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=107317 00:24:17.024 14:42:29 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:17.024 14:42:29 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:17.024 14:42:29 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 107317 /var/tmp/bdevperf.sock 00:24:17.024 14:42:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 107317 ']' 00:24:17.024 14:42:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:17.024 14:42:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:17.024 14:42:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:17.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
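At this point the failover target is fully provisioned and the initiator-side bdevperf process is still coming up. Stripped of the xtrace noise, the setup traced above amounts to the following sketch (paths trimmed to the repo root; the NQN and serial number are the ones host/failover.sh uses):

# Target: one malloc-backed namespace, reachable through three TCP listeners
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done

# Initiator: bdevperf started idle (-z) with its own RPC socket, so the test can
# attach controllers and drive I/O over /var/tmp/bdevperf.sock afterwards
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f

Giving the subsystem three listeners up front is what lets the test later remove them one at a time and watch I/O move between ports without reconfiguring the target.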
00:24:17.024 14:42:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:17.024 14:42:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:17.281 14:42:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:17.281 14:42:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:24:17.281 14:42:29 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:17.843 NVMe0n1 00:24:17.843 14:42:29 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:18.099 00:24:18.099 14:42:30 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=107351 00:24:18.099 14:42:30 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:18.099 14:42:30 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:19.058 14:42:31 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:19.316 14:42:31 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:22.596 14:42:34 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:22.596 00:24:22.596 14:42:34 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:23.164 [2024-07-10 14:42:35.152482] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90ff0 is same with the state(5) to be set 00:24:23.164 [2024-07-10 14:42:35.152536] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90ff0 is same with the state(5) to be set 00:24:23.164 [2024-07-10 14:42:35.152549] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90ff0 is same with the state(5) to be set 00:24:23.164 [2024-07-10 14:42:35.152557] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90ff0 is same with the state(5) to be set 00:24:23.164 [2024-07-10 14:42:35.152567] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90ff0 is same with the state(5) to be set 00:24:23.164 [2024-07-10 14:42:35.152575] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90ff0 is same with the state(5) to be set 00:24:23.164 [2024-07-10 14:42:35.152584] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90ff0 is same with the state(5) to be set 00:24:23.164 [2024-07-10 14:42:35.152592] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90ff0 is same with the state(5) to be set 00:24:23.164 [2024-07-10 14:42:35.152600] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90ff0 is same with the state(5) to be set 00:24:23.164 [2024-07-10 14:42:35.152609] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xf90ff0 is same with the state(5) to be set
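The controller attach and the failover trigger traced above (failover.sh markers 35 through 48) amount roughly to the sketch below. It reuses the ROOT/SOCK/NQN shorthands from the earlier sketch; the comments on path handling describe what the test relies on, not a guarantee from the RPCs themselves.

  # attach the same subsystem through two portals under one controller name;
  # the second attach gives NVMe0 an alternate path to fail over to
  "$ROOT/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
  "$ROOT/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN"
  # start the queued verify workload in the background and remember its pid
  "$ROOT/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests &
  run_test_pid=$!
  sleep 1
  # drop the active 4420 portal: in-flight I/O is aborted (the SQ DELETION
  # completions seen later in try.txt) and the initiator moves to the 4421 path
  "$ROOT/scripts/rpc.py" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  sleep 3
  # offer a third path and then retire 4421 as well, forcing a second failover
  "$ROOT/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$NQN"
  "$ROOT/scripts/rpc.py" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421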
00:24:23.165 14:42:35 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:24:26.446 14:42:38 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:26.446 [2024-07-10 14:42:38.450834] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:26.446 14:42:38 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:24:27.380 14:42:39 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:27.639 [2024-07-10 14:42:39.768870] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf916d0 is same with the state(5) to be set
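The failback step above, together with the wait/killprocess teardown that follows in the trace, corresponds roughly to the sketch below. killprocess is approximated with a plain kill/wait, and the grep at the end is only a suggested way to confirm from try.txt that I/O was aborted during the portal switches; both are assumptions layered on the traced commands.

  # failback: restore the original 4420 portal, give the initiator a moment to
  # reconnect, then retire the temporary 4422 portal
  sleep 3
  "$ROOT/scripts/rpc.py" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  "$ROOT/scripts/rpc.py" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422
  # let the 15 s verify run finish, then tear bdevperf down (simplified killprocess)
  wait "$run_test_pid"
  kill "$bdevperf_pid" 2>/dev/null
  wait "$bdevperf_pid" 2>/dev/null
  # quick sanity check on the captured output: count the aborted submissions
  grep -c 'ABORTED - SQ DELETION' try.txt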
00:24:27.640 14:42:39 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 107351
00:24:34.207 0
00:24:34.207 14:42:45 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 107317
00:24:34.207 14:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 107317 ']'
00:24:34.207 14:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 107317
00:24:34.207 14:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:24:34.207 14:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:34.207 14:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107317
00:24:34.207 killing process with pid 107317
00:24:34.207 14:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:24:34.207 14:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:24:34.207 14:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107317'
00:24:34.207 14:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 107317
00:24:34.207 14:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 107317
00:24:34.207 14:42:45 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:24:34.207 [2024-07-10
14:42:29.248388] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:24:34.207 [2024-07-10 14:42:29.248665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107317 ] 00:24:34.207 [2024-07-10 14:42:29.375136] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:34.207 [2024-07-10 14:42:29.398693] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.207 [2024-07-10 14:42:29.441430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.207 Running I/O for 15 seconds... 00:24:34.207 [2024-07-10 14:42:31.472115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.207 [2024-07-10 14:42:31.472186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.472217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.207 [2024-07-10 14:42:31.472233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.472250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.207 [2024-07-10 14:42:31.472265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.472297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.207 [2024-07-10 14:42:31.472314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.472330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.207 [2024-07-10 14:42:31.472344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.472360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.207 [2024-07-10 14:42:31.472374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.472390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.207 [2024-07-10 14:42:31.472404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.472420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.207 [2024-07-10 14:42:31.472434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.472450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.207 [2024-07-10 14:42:31.472464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.472480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.207 [2024-07-10 14:42:31.472494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.472520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.207 [2024-07-10 14:42:31.472564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.472582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:81504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.207 [2024-07-10 14:42:31.472597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.472612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.207 [2024-07-10 14:42:31.472626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.472642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.207 [2024-07-10 14:42:31.472655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.472671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.207 [2024-07-10 14:42:31.472685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.472701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.207 [2024-07-10 14:42:31.472716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.472731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:81544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.207 [2024-07-10 14:42:31.472745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.472767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.207 [2024-07-10 14:42:31.472782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.472798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:81560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.207 [2024-07-10 14:42:31.472814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.472830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:81568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.207 [2024-07-10 14:42:31.472857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.472874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:81576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.207 [2024-07-10 14:42:31.472888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.472904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.207 [2024-07-10 14:42:31.472918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.472934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.207 [2024-07-10 14:42:31.472948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.472973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.207 [2024-07-10 14:42:31.472988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.473003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.207 [2024-07-10 14:42:31.473017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.473033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.207 [2024-07-10 14:42:31.473046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.473062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.207 [2024-07-10 14:42:31.473076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.473102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:81632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.207 [2024-07-10 14:42:31.473116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.473132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.207 [2024-07-10 14:42:31.473146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.473161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.207 [2024-07-10 14:42:31.473175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.473191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.207 [2024-07-10 14:42:31.473205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.473220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.207 [2024-07-10 14:42:31.473235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.473251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:81672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.207 [2024-07-10 14:42:31.473265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.207 [2024-07-10 14:42:31.473292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.473310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.473327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.473341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.473357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.473378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.473395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:81704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.473410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.473425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:81712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.473439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 
[2024-07-10 14:42:31.473455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.473469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.473484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:81728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.473498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.473515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:81736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.473528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.473544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.473558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.473574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.473588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.473603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.473617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.473633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:81768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.473647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.473662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.473676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.473692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.473706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.473723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:81792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.473737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.473758] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.473781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.473797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.473811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.473827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:81816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.473841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.473857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.473871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.473886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.473901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.473916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:81840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.473930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.473946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.473959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.473975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:81856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.473989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.474004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.474019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.474034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.474048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.474064] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.474078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.474093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.474107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.474122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.474136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.474158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:81904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.474173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.474188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.474202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.474218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.474232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.474248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.474262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.474277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.474311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.474327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.474342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.474358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.474372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.474388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:108 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.474402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.474418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.474432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.474447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.474461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.474477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.474491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.474506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.474520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.474536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.474557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.474573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.474587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.474603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.208 [2024-07-10 14:42:31.474617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.208 [2024-07-10 14:42:31.474632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.209 [2024-07-10 14:42:31.474646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.474662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.209 [2024-07-10 14:42:31.474676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.474692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82040 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.209 [2024-07-10 14:42:31.474706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.474722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.209 [2024-07-10 14:42:31.474736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.474752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.209 [2024-07-10 14:42:31.474765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.474781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.209 [2024-07-10 14:42:31.474795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.474811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.209 [2024-07-10 14:42:31.474825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.474840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.209 [2024-07-10 14:42:31.474854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.474872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.209 [2024-07-10 14:42:31.474886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.474902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.209 [2024-07-10 14:42:31.474917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.474938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.209 [2024-07-10 14:42:31.474952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.474968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.209 [2024-07-10 14:42:31.474982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.474998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:34.209 [2024-07-10 14:42:31.475012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.475028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.209 [2024-07-10 14:42:31.475041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.475057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.209 [2024-07-10 14:42:31.475070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.475086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.209 [2024-07-10 14:42:31.475100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.475116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.209 [2024-07-10 14:42:31.475130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.475145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.209 [2024-07-10 14:42:31.475159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.475175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.209 [2024-07-10 14:42:31.475189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.475204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.209 [2024-07-10 14:42:31.475218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.475233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.209 [2024-07-10 14:42:31.475247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.475268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.209 [2024-07-10 14:42:31.475297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.475315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.209 [2024-07-10 14:42:31.475336] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.475352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.209 [2024-07-10 14:42:31.475366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.475384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.209 [2024-07-10 14:42:31.475398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.475414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.209 [2024-07-10 14:42:31.475428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.475443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.209 [2024-07-10 14:42:31.475468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.475483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.209 [2024-07-10 14:42:31.475497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.475512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.209 [2024-07-10 14:42:31.475526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.475542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.209 [2024-07-10 14:42:31.475556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.475571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.209 [2024-07-10 14:42:31.475585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.475600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.209 [2024-07-10 14:42:31.475614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.475630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.209 [2024-07-10 14:42:31.475644] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.475659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.209 [2024-07-10 14:42:31.475673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.475688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.209 [2024-07-10 14:42:31.475702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.475723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.209 [2024-07-10 14:42:31.475738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.475753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.209 [2024-07-10 14:42:31.475767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.475785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.209 [2024-07-10 14:42:31.475799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.475814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.209 [2024-07-10 14:42:31.475829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.475844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.209 [2024-07-10 14:42:31.475858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.475875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.209 [2024-07-10 14:42:31.475889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.209 [2024-07-10 14:42:31.475905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.209 [2024-07-10 14:42:31.475919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:31.475934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.210 [2024-07-10 14:42:31.475949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:31.475964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.210 [2024-07-10 14:42:31.475978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:31.476000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:31.476013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:31.476029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:31.476043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:31.476059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:31.476072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:31.476088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:31.476102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:31.476124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:31.476139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:31.476154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:31.476168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:31.476184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:31.476199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:31.476231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.210 [2024-07-10 14:42:31.476245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.210 [2024-07-10 14:42:31.476257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82320 len:8 PRP1 0x0 PRP2 0x0 00:24:34.210 [2024-07-10 14:42:31.476270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:31.476336] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20255e0 was 
disconnected and freed. reset controller. 00:24:34.210 [2024-07-10 14:42:31.476357] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:34.210 [2024-07-10 14:42:31.476417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.210 [2024-07-10 14:42:31.476438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:31.476454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.210 [2024-07-10 14:42:31.476467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:31.476484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.210 [2024-07-10 14:42:31.476498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:31.476512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.210 [2024-07-10 14:42:31.476525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:31.476539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.210 [2024-07-10 14:42:31.476591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fff240 (9): Bad file descriptor 00:24:34.210 [2024-07-10 14:42:31.480588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.210 [2024-07-10 14:42:31.518100] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:34.210 [2024-07-10 14:42:35.153846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:35.153894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:35.153924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:35.153970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:35.153990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:35.154004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:35.154020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:35.154034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:35.154050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:35.154064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:35.154080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:35.154094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:35.154109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:35.154123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:35.154139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:35.154153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:35.154169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:35.154183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:35.154198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:35.154212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:35.154228] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:35.154242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:35.154258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:35.154272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:35.154305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:35.154321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:35.154337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:35.154351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:35.154376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:35.154391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:35.154406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:84256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:35.154420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:35.154436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:35.154456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:35.154473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:35.154487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:35.154503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:35.154528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:35.154552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:35.154567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:35.154584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:84296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:35.154598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:35.154614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:84304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:35.154628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:35.154643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:84312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:35.154657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:35.154673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:84320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:35.154687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:35.154703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:84328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.210 [2024-07-10 14:42:35.154717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.210 [2024-07-10 14:42:35.154732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.154746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.154762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.154786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.154804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.154818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.154834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:84360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.154848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.154863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.154878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.154893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:47 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.154907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.154923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:84384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.154943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.154958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.154976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.154994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:84400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.155008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.155024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:84408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.155038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.155054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:84416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.155068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.155084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:84424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.155098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.155114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.155128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.155144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.155158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.155180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:84448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.155195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.155210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:84456 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.155224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.155241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:84464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.155255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.155271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.155298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.155316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:84480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.155330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.155346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.155360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.155376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.155390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.155406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.155420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.155436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.155450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.155466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:84520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.155483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.155499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.155514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.155529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:34.211 [2024-07-10 14:42:35.155543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.155559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.155573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.155601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:84552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.155616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.155632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:84560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.155646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.155661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:84568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.155676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.155692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.155706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.155721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:84584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.155735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.155751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:84592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.155765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.211 [2024-07-10 14:42:35.155781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:84600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.211 [2024-07-10 14:42:35.155795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.155812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:84608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.212 [2024-07-10 14:42:35.155826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.155842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:84616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.212 [2024-07-10 14:42:35.155856] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.155873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.212 [2024-07-10 14:42:35.155887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.155903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:84632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.212 [2024-07-10 14:42:35.155917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.155933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:84640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.212 [2024-07-10 14:42:35.155947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.155963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.212 [2024-07-10 14:42:35.155986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.212 [2024-07-10 14:42:35.156018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.212 [2024-07-10 14:42:35.156049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:84672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.212 [2024-07-10 14:42:35.156078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.212 [2024-07-10 14:42:35.156108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:84688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.212 [2024-07-10 14:42:35.156138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.212 [2024-07-10 14:42:35.156168] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.212 [2024-07-10 14:42:35.156199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.212 [2024-07-10 14:42:35.156230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.212 [2024-07-10 14:42:35.156261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.212 [2024-07-10 14:42:35.156303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.212 [2024-07-10 14:42:35.156334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.212 [2024-07-10 14:42:35.156364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.212 [2024-07-10 14:42:35.156402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.212 [2024-07-10 14:42:35.156434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.212 [2024-07-10 14:42:35.156476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.212 [2024-07-10 14:42:35.156509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.212 [2024-07-10 14:42:35.156540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.212 [2024-07-10 14:42:35.156569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.212 [2024-07-10 14:42:35.156600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.212 [2024-07-10 14:42:35.156629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.212 [2024-07-10 14:42:35.156659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.212 [2024-07-10 14:42:35.156689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:85008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.212 [2024-07-10 14:42:35.156719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:85016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.212 [2024-07-10 14:42:35.156750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:85024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.212 [2024-07-10 14:42:35.156787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:85032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.212 [2024-07-10 14:42:35.156817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:85040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.212 [2024-07-10 14:42:35.156864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.212 [2024-07-10 14:42:35.156896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:85056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.212 [2024-07-10 14:42:35.156926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:85064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.212 [2024-07-10 14:42:35.156956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.156972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:85072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.212 [2024-07-10 14:42:35.156986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.157002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:85080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.212 [2024-07-10 14:42:35.157018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.157035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:85088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.212 [2024-07-10 14:42:35.157049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.157064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.212 [2024-07-10 14:42:35.157079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.157094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:85104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.212 [2024-07-10 14:42:35.157108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 14:42:35.157124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:85112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.212 [2024-07-10 14:42:35.157138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.212 [2024-07-10 
14:42:35.157154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:85120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.213 [2024-07-10 14:42:35.157168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:35.157183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:85128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.213 [2024-07-10 14:42:35.157206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:35.157223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:85136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.213 [2024-07-10 14:42:35.157237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:35.157253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.213 [2024-07-10 14:42:35.157267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:35.157297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:85152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.213 [2024-07-10 14:42:35.157314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:35.157331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:84712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.213 [2024-07-10 14:42:35.157345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:35.157361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:84720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.213 [2024-07-10 14:42:35.157375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:35.157391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.213 [2024-07-10 14:42:35.157405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:35.157421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:84736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.213 [2024-07-10 14:42:35.157435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:35.157451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:84744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.213 [2024-07-10 14:42:35.157465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:35.157480] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:84752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.213 [2024-07-10 14:42:35.157494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:35.157510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:84760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.213 [2024-07-10 14:42:35.157527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:35.157543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:84768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.213 [2024-07-10 14:42:35.157557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:35.157573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.213 [2024-07-10 14:42:35.157587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:35.157610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.213 [2024-07-10 14:42:35.157625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:35.157641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:84792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.213 [2024-07-10 14:42:35.157655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:35.157671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:84800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.213 [2024-07-10 14:42:35.157685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:35.157701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:84808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.213 [2024-07-10 14:42:35.157715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:35.157737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:84816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.213 [2024-07-10 14:42:35.157752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:35.157778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:84824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.213 [2024-07-10 14:42:35.157798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:35.157815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:98 nsid:1 lba:84832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.213 [2024-07-10 14:42:35.157829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:35.157844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:84840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.213 [2024-07-10 14:42:35.157859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:35.157875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.213 [2024-07-10 14:42:35.157889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:35.157904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.213 [2024-07-10 14:42:35.157919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:35.157934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:84864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.213 [2024-07-10 14:42:35.157948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:35.157965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:84872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.213 [2024-07-10 14:42:35.157986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:35.158001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20264e0 is same with the state(5) to be set 00:24:34.213 [2024-07-10 14:42:35.158030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.213 [2024-07-10 14:42:35.158042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.213 [2024-07-10 14:42:35.158054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84880 len:8 PRP1 0x0 PRP2 0x0 00:24:34.213 [2024-07-10 14:42:35.158067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:35.158130] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20264e0 was disconnected and freed. reset controller. 
00:24:34.213 [2024-07-10 14:42:35.158149] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:24:34.213 [2024-07-10 14:42:35.158231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:34.213 [2024-07-10 14:42:35.158252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:34.213 [2024-07-10 14:42:35.158268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:34.213 [2024-07-10 14:42:35.158296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:34.213 [2024-07-10 14:42:35.158313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:34.213 [2024-07-10 14:42:35.158327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:34.213 [2024-07-10 14:42:35.158341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:34.213 [2024-07-10 14:42:35.158354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:34.213 [2024-07-10 14:42:35.158368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.213 [2024-07-10 14:42:35.158439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fff240 (9): Bad file descriptor
00:24:34.213 [2024-07-10 14:42:35.162478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.213 [2024-07-10 14:42:35.202368] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
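The failover notice above shows bdev_nvme moving the active path from 10.0.0.2:4421 to 10.0.0.2:4422 once the 4421 queue pair is torn down. That only works because the alternate listeners were registered and the same subsystem was attached over each path beforehand. A minimal sketch of that setup, built only from the rpc.py calls that appear verbatim further down in this log; the relative rpc.py path, the loop, and the exact ordering are illustrative, not the failover.sh source:

    # Expose the subsystem on the extra TCP ports so failover has somewhere to go
    # (the 4420 listener is assumed to exist already from the earlier target setup).
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

    # Attach the same subsystem over every path under one controller name in the
    # bdevperf app; bdev_nvme can then fail over 4420 -> 4421 -> 4422 -> 4420.
    for port in 4420 4421 4422; do
        scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
            -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done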
00:24:34.213 [2024-07-10 14:42:39.768293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.213 [2024-07-10 14:42:39.768784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:39.768908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.213 [2024-07-10 14:42:39.768935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:39.768952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.213 [2024-07-10 14:42:39.768965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:39.768979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.213 [2024-07-10 14:42:39.768991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:39.769005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fff240 is same with the state(5) to be set 00:24:34.213 [2024-07-10 14:42:39.775860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.213 [2024-07-10 14:42:39.775901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:39.775930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.213 [2024-07-10 14:42:39.775947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:39.775963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.213 [2024-07-10 14:42:39.775977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.213 [2024-07-10 14:42:39.775993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-07-10 14:42:39.776007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-07-10 14:42:39.776037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-07-10 14:42:39.776066] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-07-10 14:42:39.776095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-07-10 14:42:39.776125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-07-10 14:42:39.776153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-07-10 14:42:39.776183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-07-10 14:42:39.776212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-07-10 14:42:39.776241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-07-10 14:42:39.776305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-07-10 14:42:39.776340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-07-10 14:42:39.776370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-07-10 14:42:39.776404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-07-10 14:42:39.776458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-07-10 14:42:39.776491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-07-10 14:42:39.776521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-07-10 14:42:39.776551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-07-10 14:42:39.776580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-07-10 14:42:39.776610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-07-10 14:42:39.776640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-07-10 14:42:39.776669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-07-10 14:42:39.776700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-07-10 14:42:39.776741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.214 [2024-07-10 14:42:39.776782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.214 [2024-07-10 14:42:39.776812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.214 [2024-07-10 14:42:39.776855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.214 [2024-07-10 14:42:39.776888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.214 [2024-07-10 14:42:39.776918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.214 [2024-07-10 14:42:39.776949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.214 [2024-07-10 14:42:39.776980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.776996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.214 [2024-07-10 14:42:39.777009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.777025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.214 [2024-07-10 14:42:39.777039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.777055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.214 [2024-07-10 14:42:39.777068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 
[2024-07-10 14:42:39.777084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.214 [2024-07-10 14:42:39.777097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.777113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.214 [2024-07-10 14:42:39.777127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.777152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.214 [2024-07-10 14:42:39.777167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.777182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.214 [2024-07-10 14:42:39.777196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.777212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.214 [2024-07-10 14:42:39.777226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.777241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.214 [2024-07-10 14:42:39.777256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.777272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.214 [2024-07-10 14:42:39.777300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.777319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.214 [2024-07-10 14:42:39.777334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.777349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.214 [2024-07-10 14:42:39.777364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.777380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.214 [2024-07-10 14:42:39.777394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-07-10 14:42:39.777410] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.777423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.777439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.777454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.777470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.777483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.777499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.777513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.777529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.777552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.777569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.777583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.777599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.777613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.777629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.777643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.777659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.777673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.777688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.777702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.777719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:125 nsid:1 lba:24392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.777733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.777749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.777762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.777778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.777792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.777808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.777823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.777838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.777852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.777868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.777883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.777898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.777912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.777936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.777951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.777967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.777981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.777996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.778010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.778026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24472 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.778040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.778056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.778070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.778085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.778099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.778115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.778129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.778144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.778158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.778174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.778188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.778203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.778217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.778233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.778246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.778262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.778276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.778305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.778321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.778344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 
14:42:39.778359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.778376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.778399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.778418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.778432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.778448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.778468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.778485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.778500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.778516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-07-10 14:42:39.778530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.215 [2024-07-10 14:42:39.778546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.778560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.778575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.778589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.778605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.778622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.778648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.778671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.778689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.778708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.778736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.778752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.778768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.778797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.778814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.778829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.778845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.778859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.778875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.778889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.778905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.778919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.778935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.778948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.778964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.778978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.778994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.779008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.779023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.779037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.779054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:24720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.779068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.779083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.779097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.779112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.779126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.779142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.779156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.779181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.779197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.779212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.779227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.779242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.779256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.779272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.779299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.779316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.779331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.779346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.779360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.779376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.779389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.779405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.779419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.779434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.779448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.779464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.779478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.779494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.779509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.779525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.779539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.779555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.779576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.779593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.779612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.779628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.779642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.779658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.779672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.779687] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.779701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.779716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.779730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.779746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.779760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.779775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.779789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.779805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.779819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.779835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.779849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.216 [2024-07-10 14:42:39.779864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-07-10 14:42:39.779878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.217 [2024-07-10 14:42:39.779894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.217 [2024-07-10 14:42:39.779908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.217 [2024-07-10 14:42:39.779923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.217 [2024-07-10 14:42:39.779937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.217 [2024-07-10 14:42:39.779953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.217 [2024-07-10 14:42:39.779981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.217 [2024-07-10 14:42:39.780009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1ffc980 is same with the state(5) to be set 00:24:34.217 [2024-07-10 14:42:39.780032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.217 [2024-07-10 14:42:39.780043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.217 [2024-07-10 14:42:39.780055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:8 PRP1 0x0 PRP2 0x0 00:24:34.217 [2024-07-10 14:42:39.780069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.217 [2024-07-10 14:42:39.780118] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ffc980 was disconnected and freed. reset controller. 00:24:34.217 [2024-07-10 14:42:39.780135] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:34.217 [2024-07-10 14:42:39.780150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.217 [2024-07-10 14:42:39.780199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fff240 (9): Bad file descriptor 00:24:34.217 [2024-07-10 14:42:39.784212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.217 [2024-07-10 14:42:39.818541] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:34.217 00:24:34.217 Latency(us) 00:24:34.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.217 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:34.217 Verification LBA range: start 0x0 length 0x4000 00:24:34.217 NVMe0n1 : 15.01 8634.88 33.73 223.18 0.00 14417.57 640.47 20018.27 00:24:34.217 =================================================================================================================== 00:24:34.217 Total : 8634.88 33.73 223.18 0.00 14417.57 640.47 20018.27 00:24:34.217 Received shutdown signal, test time was about 15.000000 seconds 00:24:34.217 00:24:34.217 Latency(us) 00:24:34.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.217 =================================================================================================================== 00:24:34.217 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:34.217 14:42:45 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:34.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:34.217 14:42:45 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:34.217 14:42:45 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:34.217 14:42:45 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=107548 00:24:34.217 14:42:45 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:34.217 14:42:45 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 107548 /var/tmp/bdevperf.sock 00:24:34.217 14:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 107548 ']' 00:24:34.217 14:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:34.217 14:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:34.217 14:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:34.217 14:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:34.217 14:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:34.217 14:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:34.217 14:42:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:24:34.217 14:42:45 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:34.217 [2024-07-10 14:42:46.060519] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:34.217 14:42:46 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:34.217 [2024-07-10 14:42:46.304754] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:34.217 14:42:46 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:34.476 NVMe0n1 00:24:34.476 14:42:46 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:34.734 00:24:34.734 14:42:46 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:34.993 00:24:34.993 14:42:47 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:34.993 14:42:47 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:35.252 14:42:47 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:35.510 14:42:47 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:38.832 14:42:50 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:38.832 14:42:50 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:38.832 14:42:51 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=107666 00:24:38.832 14:42:51 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:38.832 14:42:51 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 107666 00:24:40.209 0 00:24:40.209 14:42:52 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:40.209 [2024-07-10 14:42:45.564476] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:24:40.209 [2024-07-10 14:42:45.564596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107548 ] 00:24:40.209 [2024-07-10 14:42:45.686653] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:40.209 [2024-07-10 14:42:45.706269] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.209 [2024-07-10 14:42:45.743394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.209 [2024-07-10 14:42:47.716027] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:40.209 [2024-07-10 14:42:47.716157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.209 [2024-07-10 14:42:47.716183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.209 [2024-07-10 14:42:47.716202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.209 [2024-07-10 14:42:47.716217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.209 [2024-07-10 14:42:47.716232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.209 [2024-07-10 14:42:47.716245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.209 [2024-07-10 14:42:47.716260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.209 [2024-07-10 14:42:47.716273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.209 [2024-07-10 14:42:47.716303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:40.209 [2024-07-10 14:42:47.716349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.209 [2024-07-10 14:42:47.716383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd02240 (9): Bad file descriptor 00:24:40.209 [2024-07-10 14:42:47.724658] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:40.209 Running I/O for 1 seconds... 00:24:40.209 00:24:40.209 Latency(us) 00:24:40.209 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.209 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:40.209 Verification LBA range: start 0x0 length 0x4000 00:24:40.209 NVMe0n1 : 1.01 8512.66 33.25 0.00 0.00 14953.10 2308.65 16205.27 00:24:40.209 =================================================================================================================== 00:24:40.209 Total : 8512.66 33.25 0.00 0.00 14953.10 2308.65 16205.27 00:24:40.209 14:42:52 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:40.209 14:42:52 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:40.209 14:42:52 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:40.468 14:42:52 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:40.468 14:42:52 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:40.727 14:42:52 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:40.984 14:42:53 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:44.269 14:42:56 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:44.269 14:42:56 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:44.269 14:42:56 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 107548 00:24:44.269 14:42:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 107548 ']' 00:24:44.269 14:42:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 107548 00:24:44.269 14:42:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:24:44.269 14:42:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:44.269 14:42:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107548 00:24:44.269 14:42:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:44.269 14:42:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:44.269 killing process with pid 107548 00:24:44.269 14:42:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107548' 00:24:44.269 14:42:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 107548 00:24:44.269 14:42:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 107548 00:24:44.528 
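For readers following the failover test traced above: the xtrace lines reduce to a short RPC sequence against the running nvmf target and the bdevperf application. The sketch below is a hedged condensation of the commands that actually appear in this log (listener additions on ports 4421/4422, controller attaches over 4420-4422, a path removal, and the final check for three successful resets); the repo, socket, and output-file paths are copied from the log, and the snippet assumes the target and bdevperf are already running as they are here, not that this is the exact script flow.

# Hedged sketch of the failover flow exercised above (assumes a running
# nvmf target and a bdevperf instance listening on /var/tmp/bdevperf.sock).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

# Expose the subsystem on the two extra failover ports in addition to 4420.
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422

# Attach the same controller through all three ports from bdevperf's side.
for port in 4420 4421 4422; do
    $RPC -s $BPERF_SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s $port -f ipv4 -n $NQN
done

# Drop the active path; bdev_nvme should fail over to the next listener.
$RPC -s $BPERF_SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n $NQN

# The test then counts 'Resetting controller successful' lines in the
# collected bdevperf output (try.txt in this run); three are expected
# after cycling through the paths.
grep -c 'Resetting controller successful' \
    /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt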
14:42:56 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:44.528 14:42:56 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:44.786 14:42:56 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:44.786 14:42:56 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:44.786 14:42:56 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:44.786 14:42:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:44.787 14:42:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:24:44.787 14:42:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:44.787 14:42:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:24:44.787 14:42:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:44.787 14:42:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:44.787 rmmod nvme_tcp 00:24:44.787 rmmod nvme_fabrics 00:24:44.787 rmmod nvme_keyring 00:24:44.787 14:42:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:44.787 14:42:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:24:44.787 14:42:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:24:44.787 14:42:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 107223 ']' 00:24:44.787 14:42:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 107223 00:24:44.787 14:42:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 107223 ']' 00:24:44.787 14:42:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 107223 00:24:44.787 14:42:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:24:44.787 14:42:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:44.787 14:42:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107223 00:24:44.787 14:42:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:44.787 14:42:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:44.787 killing process with pid 107223 00:24:44.787 14:42:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107223' 00:24:44.787 14:42:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 107223 00:24:44.787 14:42:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 107223 00:24:45.050 14:42:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:45.050 14:42:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:45.050 14:42:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:45.050 14:42:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:45.050 14:42:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:45.050 14:42:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.050 14:42:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:45.050 14:42:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.050 14:42:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- 
# ip -4 addr flush nvmf_init_if 00:24:45.051 00:24:45.051 real 0m30.590s 00:24:45.051 user 1m59.745s 00:24:45.051 sys 0m4.427s 00:24:45.051 14:42:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:45.051 14:42:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:45.051 ************************************ 00:24:45.051 END TEST nvmf_failover 00:24:45.051 ************************************ 00:24:45.051 14:42:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:45.051 14:42:57 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:45.051 14:42:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:45.051 14:42:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:45.051 14:42:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:45.051 ************************************ 00:24:45.051 START TEST nvmf_host_discovery 00:24:45.051 ************************************ 00:24:45.051 14:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:45.352 * Looking for test storage... 00:24:45.352 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 
1 ']' 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:45.352 14:42:57 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:45.352 Cannot find device "nvmf_tgt_br" 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:45.352 Cannot find device "nvmf_tgt_br2" 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:45.352 Cannot find device "nvmf_tgt_br" 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:45.352 Cannot find device "nvmf_tgt_br2" 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:45.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:45.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:45.352 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:45.353 14:42:57 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:45.353 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:45.353 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:45.353 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:45.353 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:45.353 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:45.353 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:45.353 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:45.353 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:45.611 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:45.611 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:45.611 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:45.611 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:45.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:45.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:24:45.611 00:24:45.611 --- 10.0.0.2 ping statistics --- 00:24:45.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.611 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:24:45.611 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:45.611 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:45.611 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:24:45.611 00:24:45.611 --- 10.0.0.3 ping statistics --- 00:24:45.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.611 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:24:45.611 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:45.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:45.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:24:45.611 00:24:45.611 --- 10.0.0.1 ping statistics --- 00:24:45.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.611 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:24:45.611 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:45.611 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:24:45.611 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:45.611 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:45.611 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:45.611 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:45.611 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:45.611 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:45.611 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:45.611 14:42:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:45.611 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:45.611 14:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:45.611 14:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.611 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=107976 00:24:45.611 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 107976 00:24:45.611 14:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 107976 ']' 00:24:45.611 14:42:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:45.611 14:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.611 14:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:45.611 14:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.611 14:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:45.611 14:42:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.611 [2024-07-10 14:42:57.786436] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:24:45.611 [2024-07-10 14:42:57.786534] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.870 [2024-07-10 14:42:57.909453] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
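The nvmf_veth_init steps traced above build a small veth/bridge topology: the initiator keeps nvmf_init_if (10.0.0.1) in the default namespace, the target interfaces (10.0.0.2 and 10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, and the host-side peer ends are joined by the nvmf_br bridge. The following is a condensed sketch of that setup, using only commands that appear in the log (run as root; interface and address names as above), not a verbatim copy of nvmf/common.sh.

# Hedged sketch of the veth/bridge topology used by these host tests.
ip netns add nvmf_tgt_ns_spdk

# veth pairs: one for the initiator, two for the target.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target ends into the namespace and assign addresses.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peers together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP traffic in and forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity sanity checks, as in the log.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1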
00:24:45.870 [2024-07-10 14:42:57.929635] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.870 [2024-07-10 14:42:57.970330] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.870 [2024-07-10 14:42:57.970384] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.870 [2024-07-10 14:42:57.970404] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.870 [2024-07-10 14:42:57.970421] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:45.870 [2024-07-10 14:42:57.970434] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:45.870 [2024-07-10 14:42:57.970470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.805 14:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.806 [2024-07-10 14:42:58.819831] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.806 [2024-07-10 14:42:58.831953] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.806 null0 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.806 null1 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=108026 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 108026 /tmp/host.sock 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 108026 ']' 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:46.806 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:46.806 14:42:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.806 [2024-07-10 14:42:58.927836] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:24:46.806 [2024-07-10 14:42:58.927933] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108026 ] 00:24:46.806 [2024-07-10 14:42:59.050374] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:24:46.806 [2024-07-10 14:42:59.065506] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.065 [2024-07-10 14:42:59.107398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.002 14:42:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:48.002 14:42:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:24:48.002 14:42:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:48.002 14:42:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:48.002 14:42:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.002 14:42:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.002 14:42:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.002 14:42:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:48.002 14:42:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.002 14:42:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.002 14:42:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.002 14:42:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:48.002 14:42:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:48.002 14:42:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:48.002 14:42:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:48.002 14:42:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:48.002 14:42:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.002 14:42:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:48.002 14:42:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.002 14:42:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.002 14:42:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:48.002 14:42:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:48.002 14:42:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:48.002 14:42:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.002 14:42:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:48.002 14:42:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:48.002 14:42:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.002 14:42:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:48.002 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.002 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:48.002 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:48.002 14:43:00 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.002 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.002 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:48.003 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.262 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:48.262 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:48.262 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.262 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.262 [2024-07-10 14:43:00.308406] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:48.262 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.262 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:48.262 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:48.262 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.262 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:48.262 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.262 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:48.262 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:48.262 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.262 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:48.262 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:48.262 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:48.262 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:48.262 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.262 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.262 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:48.262 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:48.263 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.521 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 
'' == \n\v\m\e\0 ]] 00:24:48.521 14:43:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:24:48.779 [2024-07-10 14:43:00.943051] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:48.779 [2024-07-10 14:43:00.943094] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:48.779 [2024-07-10 14:43:00.943113] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:48.779 [2024-07-10 14:43:01.029237] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:49.036 [2024-07-10 14:43:01.086389] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:49.036 [2024-07-10 14:43:01.086451] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:49.294 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:49.294 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:49.294 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:49.294 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:49.294 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.294 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.294 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:49.294 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:49.294 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:49.294 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:49.553 
14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:49.553 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.812 [2024-07-10 14:43:01.905121] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:49.812 [2024-07-10 14:43:01.905646] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:49.812 [2024-07-10 14:43:01.905680] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:49.812 14:43:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:49.812 [2024-07-10 14:43:01.991727] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:49.812 14:43:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.812 14:43:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:49.813 14:43:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:49.813 14:43:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:49.813 14:43:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:49.813 14:43:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:49.813 14:43:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:49.813 14:43:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:49.813 14:43:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:49.813 14:43:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:49.813 14:43:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:49.813 14:43:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.813 14:43:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.813 14:43:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:49.813 14:43:02 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # sort -n 00:24:49.813 14:43:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.813 [2024-07-10 14:43:02.056068] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:49.813 [2024-07-10 14:43:02.056102] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:49.813 [2024-07-10 14:43:02.056110] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:49.813 14:43:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:49.813 14:43:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 
-- # set +x 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.189 [2024-07-10 14:43:03.194008] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:51.189 [2024-07-10 14:43:03.194238] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:51.189 [2024-07-10 14:43:03.203884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.189 [2024-07-10 14:43:03.203928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.189 [2024-07-10 14:43:03.203944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.189 [2024-07-10 14:43:03.203954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.189 [2024-07-10 14:43:03.203964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.189 [2024-07-10 14:43:03.203973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.189 [2024-07-10 14:43:03.203983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.189 [2024-07-10 14:43:03.203993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.189 [2024-07-10 14:43:03.204002] 
nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213c30 is same with the state(5) to be set 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:51.189 [2024-07-10 14:43:03.213812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2213c30 (9): Bad file descriptor 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.189 [2024-07-10 14:43:03.223841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:51.189 [2024-07-10 14:43:03.224015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.189 [2024-07-10 14:43:03.224042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2213c30 with addr=10.0.0.2, port=4420 00:24:51.189 [2024-07-10 14:43:03.224055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213c30 is same with the state(5) to be set 00:24:51.189 [2024-07-10 14:43:03.224077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2213c30 (9): Bad file descriptor 00:24:51.189 [2024-07-10 14:43:03.224107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:51.189 [2024-07-10 14:43:03.224118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:51.189 [2024-07-10 14:43:03.224130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:51.189 [2024-07-10 14:43:03.224147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.189 [2024-07-10 14:43:03.233926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:51.189 [2024-07-10 14:43:03.234117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.189 [2024-07-10 14:43:03.234144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2213c30 with addr=10.0.0.2, port=4420 00:24:51.189 [2024-07-10 14:43:03.234158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213c30 is same with the state(5) to be set 00:24:51.189 [2024-07-10 14:43:03.234180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2213c30 (9): Bad file descriptor 00:24:51.189 [2024-07-10 14:43:03.234209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:51.189 [2024-07-10 14:43:03.234221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:51.189 [2024-07-10 14:43:03.234232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:24:51.189 [2024-07-10 14:43:03.234248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.189 [2024-07-10 14:43:03.244033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:51.189 [2024-07-10 14:43:03.244192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.189 [2024-07-10 14:43:03.244219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2213c30 with addr=10.0.0.2, port=4420 00:24:51.189 [2024-07-10 14:43:03.244232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213c30 is same with the state(5) to be set 00:24:51.189 [2024-07-10 14:43:03.244254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2213c30 (9): Bad file descriptor 00:24:51.189 [2024-07-10 14:43:03.244295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:51.189 [2024-07-10 14:43:03.244314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:51.189 [2024-07-10 14:43:03.244325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:51.189 [2024-07-10 14:43:03.244341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:51.189 [2024-07-10 14:43:03.254130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:51.189 [2024-07-10 14:43:03.254266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.189 [2024-07-10 14:43:03.254306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2213c30 with addr=10.0.0.2, port=4420 00:24:51.189 [2024-07-10 14:43:03.254321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213c30 is same with the state(5) to be set 00:24:51.189 [2024-07-10 14:43:03.254342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2213c30 (9): Bad file descriptor 00:24:51.189 [2024-07-10 14:43:03.254372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:51.189 [2024-07-10 14:43:03.254383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:51.189 [2024-07-10 14:43:03.254394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:24:51.189 [2024-07-10 14:43:03.254410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:51.189 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.189 [2024-07-10 14:43:03.264200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:51.189 [2024-07-10 14:43:03.264369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.189 [2024-07-10 14:43:03.264397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2213c30 with addr=10.0.0.2, port=4420 00:24:51.189 [2024-07-10 14:43:03.264411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213c30 is same with the state(5) to be set 00:24:51.190 [2024-07-10 14:43:03.264431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2213c30 (9): Bad file descriptor 00:24:51.190 [2024-07-10 14:43:03.264448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:51.190 [2024-07-10 14:43:03.264458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:51.190 [2024-07-10 14:43:03.264469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:51.190 [2024-07-10 14:43:03.264485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.190 [2024-07-10 14:43:03.274288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:51.190 [2024-07-10 14:43:03.274432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.190 [2024-07-10 14:43:03.274458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2213c30 with addr=10.0.0.2, port=4420 00:24:51.190 [2024-07-10 14:43:03.274471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213c30 is same with the state(5) to be set 00:24:51.190 [2024-07-10 14:43:03.274491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2213c30 (9): Bad file descriptor 00:24:51.190 [2024-07-10 14:43:03.274507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:51.190 [2024-07-10 14:43:03.274517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:51.190 [2024-07-10 14:43:03.274528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:51.190 [2024-07-10 14:43:03.274544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
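The xtrace interleaved with the reset errors above is the harness's polling loop: waitforcondition() re-evaluates a condition string up to ten times, one second apart, and the discovery.sh helpers it calls query the host app over /tmp/host.sock. A minimal sketch of those helpers, reconstructed from the trace only (the real definitions live in common/autotest_common.sh and test/nvmf/host/discovery.sh and may differ in detail; rpc_cmd is assumed to be the harness wrapper around scripts/rpc.py):

waitforcondition() {
	local cond=$1
	local max=10
	while ((max--)); do
		# Succeed as soon as the condition string evaluates true, otherwise retry.
		eval "$cond" && return 0
		sleep 1
	done
	return 1
}

get_bdev_list() {
	# Sorted, space-separated bdev names as seen by the host app.
	rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {
	# Listener ports (trsvcid) of every path attached to controller $1.
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
		| jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

Used as at host/discovery.sh@122 above, waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' keeps polling until the initiator reports both the 4420 and 4421 paths.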
00:24:51.190 [2024-07-10 14:43:03.281217] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:51.190 [2024-07-10 14:43:03.281278] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.190 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:51.447 
14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.447 14:43:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.381 [2024-07-10 14:43:04.613590] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:52.381 [2024-07-10 14:43:04.613632] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:52.381 [2024-07-10 14:43:04.613651] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:52.639 [2024-07-10 14:43:04.699704] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:52.639 [2024-07-10 14:43:04.759939] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:52.639 [2024-07-10 14:43:04.759988] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:52.639 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.639 14:43:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:52.639 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:52.639 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:52.639 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:52.639 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:52.639 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:52.639 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:52.639 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.640 2024/07/10 14:43:04 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 
trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:24:52.640 request: 00:24:52.640 { 00:24:52.640 "method": "bdev_nvme_start_discovery", 00:24:52.640 "params": { 00:24:52.640 "name": "nvme", 00:24:52.640 "trtype": "tcp", 00:24:52.640 "traddr": "10.0.0.2", 00:24:52.640 "adrfam": "ipv4", 00:24:52.640 "trsvcid": "8009", 00:24:52.640 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:52.640 "wait_for_attach": true 00:24:52.640 } 00:24:52.640 } 00:24:52.640 Got JSON-RPC error response 00:24:52.640 GoRPCClient: error on JSON-RPC call 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:52.640 14:43:04 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.640 2024/07/10 14:43:04 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:24:52.640 request: 00:24:52.640 { 00:24:52.640 "method": "bdev_nvme_start_discovery", 00:24:52.640 "params": { 00:24:52.640 "name": "nvme_second", 00:24:52.640 "trtype": "tcp", 00:24:52.640 "traddr": "10.0.0.2", 00:24:52.640 "adrfam": "ipv4", 00:24:52.640 "trsvcid": "8009", 00:24:52.640 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:52.640 "wait_for_attach": true 00:24:52.640 } 00:24:52.640 } 00:24:52.640 Got JSON-RPC error response 00:24:52.640 GoRPCClient: error on JSON-RPC call 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:52.640 14:43:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:52.898 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.898 14:43:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:52.898 14:43:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:52.898 14:43:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:52.898 14:43:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:52.898 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.898 14:43:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 
-- # jq -r '.[].name' 00:24:52.898 14:43:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.898 14:43:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:52.898 14:43:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.898 14:43:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:52.898 14:43:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:52.898 14:43:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:52.898 14:43:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:52.898 14:43:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:52.898 14:43:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:52.898 14:43:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:52.898 14:43:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:52.898 14:43:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:52.898 14:43:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.898 14:43:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.832 [2024-07-10 14:43:06.041067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.832 [2024-07-10 14:43:06.041341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22302a0 with addr=10.0.0.2, port=8010 00:24:53.832 [2024-07-10 14:43:06.041375] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:53.832 [2024-07-10 14:43:06.041387] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:53.832 [2024-07-10 14:43:06.041397] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:54.765 [2024-07-10 14:43:07.041048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.765 [2024-07-10 14:43:07.041123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22302a0 with addr=10.0.0.2, port=8010 00:24:54.765 [2024-07-10 14:43:07.041145] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:54.765 [2024-07-10 14:43:07.041156] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:54.765 [2024-07-10 14:43:07.041166] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:56.136 [2024-07-10 14:43:08.040903] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:56.136 2024/07/10 14:43:08 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 
trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:24:56.136 request: 00:24:56.136 { 00:24:56.136 "method": "bdev_nvme_start_discovery", 00:24:56.136 "params": { 00:24:56.136 "name": "nvme_second", 00:24:56.136 "trtype": "tcp", 00:24:56.136 "traddr": "10.0.0.2", 00:24:56.136 "adrfam": "ipv4", 00:24:56.136 "trsvcid": "8010", 00:24:56.136 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:56.136 "wait_for_attach": false, 00:24:56.136 "attach_timeout_ms": 3000 00:24:56.136 } 00:24:56.136 } 00:24:56.136 Got JSON-RPC error response 00:24:56.136 GoRPCClient: error on JSON-RPC call 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 108026 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:56.136 rmmod nvme_tcp 00:24:56.136 rmmod nvme_fabrics 00:24:56.136 rmmod nvme_keyring 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 107976 ']' 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 107976 00:24:56.136 14:43:08 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 107976 ']' 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 107976 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107976 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:56.136 killing process with pid 107976 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107976' 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 107976 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 107976 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:56.136 14:43:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.137 14:43:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:56.137 14:43:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.137 14:43:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:56.137 00:24:56.137 real 0m11.108s 00:24:56.137 user 0m22.087s 00:24:56.137 sys 0m1.540s 00:24:56.137 14:43:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:56.137 14:43:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.137 ************************************ 00:24:56.137 END TEST nvmf_host_discovery 00:24:56.137 ************************************ 00:24:56.137 14:43:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:56.137 14:43:08 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:56.137 14:43:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:56.137 14:43:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:56.137 14:43:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:56.394 ************************************ 00:24:56.394 START TEST nvmf_host_multipath_status 00:24:56.394 ************************************ 00:24:56.394 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:56.394 * Looking for test storage... 
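The timed-out bdev_nvme_start_discovery call traced above maps one-to-one onto the JSON-RPC request dumped in the log (name, trtype, traddr, adrfam, trsvcid, hostnqn, wait_for_attach, attach_timeout_ms). As a rough illustration only, the same attempt could be issued by hand against the host socket; the direct rpc.py invocation below is an assumption modelled on the rpc_cmd wrapper, with the socket path, address and NQN taken from the log:

  # Hypothetical manual reproduction of the discovery attempt that timed out above.
  # Flags mirror the traced rpc_cmd call; nothing listens on 10.0.0.2:8010, so
  # connect() keeps failing with errno 111 until the 3000 ms attach timeout
  # expires and the RPC returns Code=-110 (Connection timed out).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
      -q nqn.2021-12.io.spdk:test -T 3000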
00:24:56.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:56.394 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:56.394 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:56.394 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:56.394 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:56.394 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:56.394 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:56.394 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:56.394 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:56.394 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:56.394 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:56.394 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:56.394 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:56.394 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:24:56.394 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:24:56.394 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:56.394 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:56.394 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:56.394 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:56.394 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:56.394 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:56.394 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:56.394 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:56.394 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.394 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.394 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.394 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:56.395 Cannot find device "nvmf_tgt_br" 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:24:56.395 Cannot find device "nvmf_tgt_br2" 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:56.395 Cannot find device "nvmf_tgt_br" 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:56.395 Cannot find device "nvmf_tgt_br2" 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:56.395 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:56.395 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:56.395 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:56.653 14:43:08 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:56.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:56.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:24:56.653 00:24:56.653 --- 10.0.0.2 ping statistics --- 00:24:56.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.653 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:56.653 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:56.653 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:24:56.653 00:24:56.653 --- 10.0.0.3 ping statistics --- 00:24:56.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.653 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:56.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:56.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:24:56.653 00:24:56.653 --- 10.0.0.1 ping statistics --- 00:24:56.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.653 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:56.653 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:56.654 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:56.654 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:56.654 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:56.654 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:56.654 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:56.654 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:56.654 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=108511 00:24:56.654 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 108511 00:24:56.654 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:56.654 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 108511 ']' 00:24:56.654 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.654 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:56.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.654 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.654 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:56.654 14:43:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:56.968 [2024-07-10 14:43:08.962416] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:24:56.968 [2024-07-10 14:43:08.962977] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:56.968 [2024-07-10 14:43:09.085259] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
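The nvmf_veth_init sequence traced above builds a small virtual topology before the target starts: an initiator-side veth left in the root namespace, target-side veths moved into nvmf_tgt_ns_spdk, everything joined by the nvmf_br bridge, with 10.0.0.1 on the initiator and 10.0.0.2/10.0.0.3 inside the namespace, then verified with the pings shown. A condensed sketch of that setup, assembled from the ip/iptables commands in the log (the second target interface and error handling are omitted, so this is a sketch rather than a copy of nvmf/common.sh):

  # condensed from the nvmf_veth_init calls logged above
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # initiator -> target namespace, as verified in the log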
00:24:56.968 [2024-07-10 14:43:09.100082] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:56.968 [2024-07-10 14:43:09.141810] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:56.968 [2024-07-10 14:43:09.142082] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:56.968 [2024-07-10 14:43:09.142266] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:56.968 [2024-07-10 14:43:09.142472] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:56.968 [2024-07-10 14:43:09.142517] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:56.968 [2024-07-10 14:43:09.142726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.968 [2024-07-10 14:43:09.142740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.968 14:43:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:56.968 14:43:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:56.968 14:43:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:56.968 14:43:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:56.968 14:43:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:57.226 14:43:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:57.226 14:43:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=108511 00:24:57.226 14:43:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:57.485 [2024-07-10 14:43:09.537565] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:57.485 14:43:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:57.742 Malloc0 00:24:57.742 14:43:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:58.000 14:43:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:58.260 14:43:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:58.517 [2024-07-10 14:43:10.716035] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.517 14:43:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:59.083 [2024-07-10 14:43:11.096327] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:59.084 14:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:59.084 14:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=108601 00:24:59.084 14:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:59.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:59.084 14:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 108601 /var/tmp/bdevperf.sock 00:24:59.084 14:43:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 108601 ']' 00:24:59.084 14:43:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:59.084 14:43:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:59.084 14:43:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:59.084 14:43:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:59.084 14:43:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:59.342 14:43:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:59.342 14:43:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:59.342 14:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:59.600 14:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:59.859 Nvme0n1 00:24:59.859 14:43:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:00.426 Nvme0n1 00:25:00.426 14:43:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:00.426 14:43:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:02.325 14:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:02.325 14:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:02.583 14:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:02.841 14:43:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:03.776 14:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true 
true true 00:25:03.776 14:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:03.776 14:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.776 14:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:04.034 14:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.034 14:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:04.034 14:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.034 14:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:04.292 14:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:04.292 14:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:04.292 14:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.292 14:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:04.858 14:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.858 14:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:04.858 14:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.858 14:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:04.858 14:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.858 14:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:04.858 14:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.858 14:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:05.424 14:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.424 14:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:05.424 14:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.424 14:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:05.424 14:43:17 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.424 14:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:05.424 14:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:05.992 14:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:05.992 14:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:07.368 14:43:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:07.368 14:43:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:07.368 14:43:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:07.368 14:43:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.368 14:43:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:07.368 14:43:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:07.368 14:43:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.368 14:43:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:07.627 14:43:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.627 14:43:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:07.627 14:43:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.627 14:43:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:07.886 14:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.886 14:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:07.887 14:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:07.887 14:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.145 14:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.145 14:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:08.145 14:43:20 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.145 14:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:08.404 14:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.404 14:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:08.404 14:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.404 14:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:08.663 14:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.664 14:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:08.664 14:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:08.922 14:43:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:09.180 14:43:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:10.116 14:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:10.116 14:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:10.116 14:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.116 14:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:10.374 14:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.374 14:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:10.374 14:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:10.374 14:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.939 14:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:10.939 14:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:10.939 14:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.939 14:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:11.197 14:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.197 14:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:11.197 14:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.197 14:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:11.456 14:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.456 14:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:11.456 14:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.456 14:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:11.715 14:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.715 14:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:11.715 14:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.715 14:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:11.974 14:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.974 14:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:11.974 14:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:12.233 14:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:12.233 14:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:13.607 14:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:13.607 14:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:13.607 14:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.607 14:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:13.607 14:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.607 14:43:25 nvmf_tcp.nvmf_host_multipath_status 
-- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:13.607 14:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.607 14:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:13.865 14:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:13.865 14:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:13.865 14:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.865 14:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:14.124 14:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.124 14:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:14.124 14:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.124 14:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:14.382 14:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.382 14:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:14.382 14:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.382 14:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:14.640 14:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.640 14:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:14.640 14:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:14.640 14:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.898 14:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:14.898 14:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:14.898 14:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:15.196 14:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:15.454 14:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:16.387 14:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:16.387 14:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:16.387 14:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.387 14:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:16.954 14:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:16.954 14:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:16.954 14:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.954 14:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:16.954 14:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:16.954 14:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:16.954 14:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.954 14:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:17.520 14:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:17.520 14:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:17.520 14:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.520 14:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:17.778 14:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:17.778 14:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:17.778 14:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:17.778 14:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.036 14:43:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:18.036 14:43:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:18.036 14:43:30 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.036 14:43:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:18.294 14:43:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:18.294 14:43:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:18.294 14:43:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:18.551 14:43:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:18.809 14:43:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:20.184 14:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:20.184 14:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:20.184 14:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.184 14:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:20.184 14:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:20.184 14:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:20.184 14:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.184 14:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:20.442 14:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.442 14:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:20.442 14:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.442 14:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:20.700 14:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.700 14:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:20.700 14:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.700 14:43:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:25:20.957 14:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.957 14:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:20.957 14:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.957 14:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:21.522 14:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:21.522 14:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:21.522 14:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.522 14:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:21.780 14:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.781 14:43:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:22.064 14:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:22.064 14:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:22.342 14:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:22.342 14:43:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:23.715 14:43:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:23.715 14:43:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:23.715 14:43:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.715 14:43:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:23.715 14:43:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.715 14:43:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:23.715 14:43:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.715 14:43:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 
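The check_status/port_status loop that repeats above follows one pattern: flip a listener's ANA state with nvmf_subsystem_listener_set_ana_state, give bdevperf a second to refresh its multipath view, then read bdev_nvme_get_io_paths over the bdevperf RPC socket and pick out the current/connected/accessible flag for the port in question with jq. A minimal sketch of that helper follows; the function body is an assumption modelled on the RPC calls and jq filters in the log, not a copy of host/multipath_status.sh:

  # hypothetical re-implementation of the port_status check pattern seen above
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  port_status() {
      local trsvcid=$1 attr=$2 expected=$3
      local actual
      actual=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$trsvcid\").$attr")
      [[ "$actual" == "$expected" ]]
  }
  # e.g. after marking the 4420 listener inaccessible on the target:
  #   "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
  #       -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
  #   port_status 4420 accessible false && echo "4420 no longer accessible"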
00:25:23.973 14:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.973 14:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:23.973 14:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.973 14:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:24.231 14:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.231 14:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:24.231 14:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.231 14:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:24.488 14:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.488 14:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:24.488 14:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.488 14:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:24.746 14:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.746 14:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:24.746 14:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:24.746 14:43:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.004 14:43:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.004 14:43:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:25.004 14:43:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:25.568 14:43:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:25.568 14:43:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:26.942 14:43:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:26.942 14:43:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:26.942 
14:43:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.942 14:43:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:26.942 14:43:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:26.942 14:43:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:26.942 14:43:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.942 14:43:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:27.200 14:43:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.200 14:43:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:27.200 14:43:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.200 14:43:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:27.457 14:43:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.457 14:43:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:27.457 14:43:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.457 14:43:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:27.715 14:43:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.715 14:43:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:27.715 14:43:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.715 14:43:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:27.996 14:43:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.996 14:43:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:27.996 14:43:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.996 14:43:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:28.255 14:43:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.255 14:43:40 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:28.255 14:43:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:28.514 14:43:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:28.772 14:43:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:30.147 14:43:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:30.147 14:43:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:30.147 14:43:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.147 14:43:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:30.147 14:43:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.147 14:43:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:30.147 14:43:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.147 14:43:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:30.406 14:43:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.406 14:43:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:30.406 14:43:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.406 14:43:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:30.664 14:43:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.664 14:43:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:30.664 14:43:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.664 14:43:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:30.922 14:43:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.922 14:43:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:30.922 14:43:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.922 14:43:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:31.179 14:43:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.179 14:43:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:31.179 14:43:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:31.179 14:43:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.745 14:43:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.745 14:43:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:31.745 14:43:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:31.745 14:43:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:32.003 14:43:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:33.379 14:43:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:33.379 14:43:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:33.379 14:43:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.379 14:43:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:33.379 14:43:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.379 14:43:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:33.379 14:43:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.379 14:43:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:33.637 14:43:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:33.637 14:43:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:33.637 14:43:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.637 14:43:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:34.203 14:43:46 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.203 14:43:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:34.203 14:43:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:34.203 14:43:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.203 14:43:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.203 14:43:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:34.461 14:43:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:34.461 14:43:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.720 14:43:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.720 14:43:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:34.720 14:43:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.720 14:43:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:34.979 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:34.979 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 108601 00:25:34.979 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 108601 ']' 00:25:34.979 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 108601 00:25:34.979 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:25:34.979 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:34.979 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 108601 00:25:34.979 killing process with pid 108601 00:25:34.979 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:34.979 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:34.979 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 108601' 00:25:34.979 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 108601 00:25:34.979 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 108601 00:25:34.979 Connection closed with partial response: 00:25:34.979 00:25:34.979 00:25:35.240 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 108601 00:25:35.240 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:35.240 [2024-07-10 14:43:11.177271] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:25:35.240 [2024-07-10 14:43:11.177489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108601 ] 00:25:35.240 [2024-07-10 14:43:11.299067] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:35.240 [2024-07-10 14:43:11.313766] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.240 [2024-07-10 14:43:11.356060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:35.240 Running I/O for 90 seconds... 00:25:35.240 [2024-07-10 14:43:27.391166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.240 [2024-07-10 14:43:27.391248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:35.240 [2024-07-10 14:43:27.391321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.240 [2024-07-10 14:43:27.391346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:35.240 [2024-07-10 14:43:27.391370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.240 [2024-07-10 14:43:27.391386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:35.240 [2024-07-10 14:43:27.391408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.240 [2024-07-10 14:43:27.391424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:35.240 [2024-07-10 14:43:27.391447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.240 [2024-07-10 14:43:27.391462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.391484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.391499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.391521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.391537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.391559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:103 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.391575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.391710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.391736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.391763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.391780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.391826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.391844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.391867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.391882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.391904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.391920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.391942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.391965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.392004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.392026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.392050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.392066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.392088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.392104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.392127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.392143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.392166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.392182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.392204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.392219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.392241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.392257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.392293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.392312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.392336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.392365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.392389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.392406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.392485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.392508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.392535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.392552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.392581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.392597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:35.241 
[2024-07-10 14:43:27.392621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.392637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.392661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.392677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.392702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.392718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.392741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.392757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.392781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.392799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.393177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.393203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.393238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.393256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.393297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.393328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.393356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.393373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.393397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.393413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 
cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.393438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.393454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.393478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.393495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.393519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.241 [2024-07-10 14:43:27.393535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.393559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.241 [2024-07-10 14:43:27.393576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.393601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.241 [2024-07-10 14:43:27.393618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.393643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.241 [2024-07-10 14:43:27.393659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.393684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.241 [2024-07-10 14:43:27.393700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.393734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.241 [2024-07-10 14:43:27.393761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.393791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.241 [2024-07-10 14:43:27.393809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:35.241 [2024-07-10 14:43:27.393833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.393849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.393883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.393900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.393926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.393942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.393969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.393986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.394011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.394027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.394052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.394068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.394093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.394109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.394133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.394150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.394175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.394191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.394215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.394231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.394256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.394272] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.394312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.394329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.394353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.394369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.394403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.394421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.394445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.394461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.394486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.394502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.394526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.394552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.394588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.394607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.394631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.394649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.394677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.394694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.394840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:35.242 [2024-07-10 14:43:27.394866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.394897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.394914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.394942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.394958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.394985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.395001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.395029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.395045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.395073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.242 [2024-07-10 14:43:27.395101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.395132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.395149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.395176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.395193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.395220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.395236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.395263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.395299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.395331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4480 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.395348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.395375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.395394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.395434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.395453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.395481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.395498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.395524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.395541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.395571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.395588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.395616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.395632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.395659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.395675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.395712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.395734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.395762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.395779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.395806] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.395822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:35.242 [2024-07-10 14:43:27.395855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.242 [2024-07-10 14:43:27.395871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.395898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.395914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.395941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.395958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.395984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.396001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.396028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.396044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.396071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.396087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.396114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.396130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.396157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.396174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.396201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.396217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 
14:43:27.396261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.396299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.396333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.396351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.396378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.396394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.396421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.396437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.396464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.396483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.396510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.396527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.396554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.396570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.396597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.396614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.396640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.243 [2024-07-10 14:43:27.396656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.396683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.243 [2024-07-10 14:43:27.396700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 
sqhd:0005 p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.396726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.243 [2024-07-10 14:43:27.396742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.396769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.243 [2024-07-10 14:43:27.396785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.396812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.243 [2024-07-10 14:43:27.396877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.396912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.243 [2024-07-10 14:43:27.396929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.396958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.243 [2024-07-10 14:43:27.396974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.397191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.243 [2024-07-10 14:43:27.397219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.397255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.397273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.397328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.397347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.397378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.397394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.397425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.397441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.397472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.397490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.397522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.397539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.397569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.397586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.397617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.397633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.397663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.397691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.397723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.397740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.397771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.397788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.397818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.397834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.397865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.397881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.397914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.397938] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.397976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.397995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:27.398027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:27.398043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:44.249765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:125824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:44.249851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:35.243 [2024-07-10 14:43:44.249911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:125856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.243 [2024-07-10 14:43:44.249938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.249962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.244 [2024-07-10 14:43:44.249979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.250001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:125696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.244 [2024-07-10 14:43:44.250018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.250040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:125720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.244 [2024-07-10 14:43:44.250055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.250107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:125752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.244 [2024-07-10 14:43:44.250125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.250480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:126320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.244 [2024-07-10 14:43:44.250507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.250533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:126336 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:35.244 [2024-07-10 14:43:44.250549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.250571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:126352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.244 [2024-07-10 14:43:44.250587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.250609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:126368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.244 [2024-07-10 14:43:44.250625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.250647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:126384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.244 [2024-07-10 14:43:44.250663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.250684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:126400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.244 [2024-07-10 14:43:44.250700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.250722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:126416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.244 [2024-07-10 14:43:44.250737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.250758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:126432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.244 [2024-07-10 14:43:44.250774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.250795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.244 [2024-07-10 14:43:44.250811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.250832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.244 [2024-07-10 14:43:44.250848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.250869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:125968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.244 [2024-07-10 14:43:44.250885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.250920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:31 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.244 [2024-07-10 14:43:44.250938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.250960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.244 [2024-07-10 14:43:44.250977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.250999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.244 [2024-07-10 14:43:44.251015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.251037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.244 [2024-07-10 14:43:44.251053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.251075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.244 [2024-07-10 14:43:44.251092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.252556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.244 [2024-07-10 14:43:44.252588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.252617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:125832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.244 [2024-07-10 14:43:44.252635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.252658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:125864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.244 [2024-07-10 14:43:44.252674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.252696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:126440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.244 [2024-07-10 14:43:44.252712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.252733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:126456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.244 [2024-07-10 14:43:44.252749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.252771] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:126472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.244 [2024-07-10 14:43:44.252794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.252816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:126488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.244 [2024-07-10 14:43:44.252832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.252881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:126504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.244 [2024-07-10 14:43:44.252901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.252923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:126520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.244 [2024-07-10 14:43:44.252939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.252960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:126536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.244 [2024-07-10 14:43:44.252976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.252997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:125912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.244 [2024-07-10 14:43:44.253013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.253035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.244 [2024-07-10 14:43:44.253051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.253073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:125976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.244 [2024-07-10 14:43:44.253089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.253111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:126008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.244 [2024-07-10 14:43:44.253127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.253706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:126552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.244 [2024-07-10 14:43:44.253736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 
sqhd:0075 p:0 m:0 dnr:0 00:25:35.244 [2024-07-10 14:43:44.253765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:126568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.244 [2024-07-10 14:43:44.253783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:35.245 [2024-07-10 14:43:44.253805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:126024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.245 [2024-07-10 14:43:44.253821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:35.245 [2024-07-10 14:43:44.253843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.245 [2024-07-10 14:43:44.253859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:35.245 [2024-07-10 14:43:44.253881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.245 [2024-07-10 14:43:44.253896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:35.245 [2024-07-10 14:43:44.253918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.245 [2024-07-10 14:43:44.253946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:35.245 [2024-07-10 14:43:44.253970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.245 [2024-07-10 14:43:44.253986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:35.245 [2024-07-10 14:43:44.254009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.245 [2024-07-10 14:43:44.254024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:35.245 [2024-07-10 14:43:44.254046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.245 [2024-07-10 14:43:44.254061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:35.245 [2024-07-10 14:43:44.254082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:126592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.245 [2024-07-10 14:43:44.254098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.245 [2024-07-10 14:43:44.254120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:126608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.245 [2024-07-10 14:43:44.254136] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:35.245 Received shutdown signal, test time was about 34.588253 seconds 00:25:35.245 00:25:35.245 Latency(us) 00:25:35.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:35.245 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:35.245 Verification LBA range: start 0x0 length 0x4000 00:25:35.245 Nvme0n1 : 34.59 8294.72 32.40 0.00 0.00 15400.26 558.55 4026531.84 00:25:35.245 =================================================================================================================== 00:25:35.245 Total : 8294.72 32.40 0.00 0.00 15400.26 558.55 4026531.84 00:25:35.245 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:35.504 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:35.504 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:35.504 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:35.504 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:35.504 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:25:35.504 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:35.504 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:25:35.504 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:35.504 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:35.504 rmmod nvme_tcp 00:25:35.504 rmmod nvme_fabrics 00:25:35.504 rmmod nvme_keyring 00:25:35.504 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:35.504 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:25:35.504 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:25:35.504 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 108511 ']' 00:25:35.504 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 108511 00:25:35.504 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 108511 ']' 00:25:35.504 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 108511 00:25:35.504 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:25:35.504 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:35.504 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 108511 00:25:35.504 killing process with pid 108511 00:25:35.504 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:35.504 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:35.504 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 108511' 00:25:35.504 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@967 -- # kill 108511 00:25:35.504 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 108511 00:25:35.763 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:35.764 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:35.764 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:35.764 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:35.764 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:35.764 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.764 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:35.764 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.764 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:35.764 ************************************ 00:25:35.764 END TEST nvmf_host_multipath_status 00:25:35.764 ************************************ 00:25:35.764 00:25:35.764 real 0m39.424s 00:25:35.764 user 2m10.492s 00:25:35.764 sys 0m9.703s 00:25:35.764 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:35.764 14:43:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:35.764 14:43:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:35.764 14:43:47 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:35.764 14:43:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:35.764 14:43:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:35.764 14:43:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:35.764 ************************************ 00:25:35.764 START TEST nvmf_discovery_remove_ifc 00:25:35.764 ************************************ 00:25:35.764 14:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:35.764 * Looking for test storage... 
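For reference, the teardown that closes the multipath_status run above reduces to a handful of commands. This is a hedged, condensed sketch using only calls that appear in the log itself (the rpc.py path, subsystem NQN, and PID 108511 are taken verbatim from it; the full nvmftestfini does more bookkeeping than shown here):

  # Delete the NVMe-oF subsystem the test created on the target
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  # Unload the kernel initiator modules (modprobe -r also drops nvme_fabrics/nvme_keyring)
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # killprocess: stop the nvmf_tgt reactor, then release the initiator-side address
  kill 108511 && wait 108511
  ip -4 addr flush nvmf_init_if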
00:25:35.764 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:35.764 14:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:35.764 14:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:35.764 14:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:35.764 14:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:35.764 14:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:35.764 14:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:35.764 14:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:35.764 14:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:35.764 14:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:35.764 14:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:35.764 14:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:35.764 14:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:35.764 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:36.023 Cannot find device "nvmf_tgt_br" 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 
00:25:36.023 Cannot find device "nvmf_tgt_br2" 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:36.023 Cannot find device "nvmf_tgt_br" 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:36.023 Cannot find device "nvmf_tgt_br2" 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:36.023 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:36.023 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:36.023 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:36.282 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:36.282 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:36.282 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:36.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:36.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:25:36.282 00:25:36.282 --- 10.0.0.2 ping statistics --- 00:25:36.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.282 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:25:36.282 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:36.282 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:36.282 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:25:36.282 00:25:36.282 --- 10.0.0.3 ping statistics --- 00:25:36.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.282 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:25:36.282 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:36.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:36.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:25:36.282 00:25:36.282 --- 10.0.0.1 ping statistics --- 00:25:36.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.282 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:25:36.282 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:36.282 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:25:36.282 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:36.282 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:36.282 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:36.282 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:36.282 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:36.282 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:36.282 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:36.282 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:36.282 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:36.282 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:36.282 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:36.282 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=109884 00:25:36.282 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:36.282 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 109884 00:25:36.282 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 109884 ']' 00:25:36.282 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.282 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:36.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:36.283 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.283 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:36.283 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:36.283 [2024-07-10 14:43:48.432558] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:25:36.283 [2024-07-10 14:43:48.432675] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.283 [2024-07-10 14:43:48.554985] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
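The nvmf_tgt starting here runs inside the nvmf_tgt_ns_spdk network namespace that nvmf_veth_init has just wired up. Condensed into one place, and with the link-up steps and the second target interface (nvmf_tgt_if2/nvmf_tgt_br2) omitted for brevity, the topology built above amounts to the following sketch (interface names, addresses, and the nvmf_tgt invocation are exactly as they appear in the log):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the default netns
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end is moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                             # bridge the two veth peers together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &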
00:25:36.541 [2024-07-10 14:43:48.574625] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.541 [2024-07-10 14:43:48.615143] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:36.541 [2024-07-10 14:43:48.615201] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:36.541 [2024-07-10 14:43:48.615214] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:36.541 [2024-07-10 14:43:48.615223] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:36.541 [2024-07-10 14:43:48.615232] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:36.541 [2024-07-10 14:43:48.615259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.541 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:36.541 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:25:36.541 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:36.541 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:36.541 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:36.541 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:36.541 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:36.541 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.541 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:36.541 [2024-07-10 14:43:48.758678] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:36.541 [2024-07-10 14:43:48.766788] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:36.541 null0 00:25:36.541 [2024-07-10 14:43:48.798812] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:36.541 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.541 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=109922 00:25:36.541 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:36.541 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 109922 /tmp/host.sock 00:25:36.541 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 109922 ']' 00:25:36.541 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:25:36.541 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:36.541 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:36.541 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
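At this point the target side is fully configured: a TCP transport, a discovery listener on 10.0.0.2:8009, a null bdev (null0), and a data listener on 10.0.0.2:4420. The rpc_cmd that applies this configuration is not echoed with its arguments in the log, so the following is only a hedged approximation of equivalent rpc.py calls against the target's default RPC socket (NQNs, addresses, ports, the serial from NVMF_SERIAL, and the '-t tcp -o' transport options come from the log; the bdev size and host-access flag are illustrative assumptions):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o                         # mirrors NVMF_TRANSPORT_OPTS='-t tcp -o'
  $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009   # discovery listener (assumed mechanism)
  $rpc bdev_null_create null0 1000 512                         # 1000 MB null bdev, 512-byte blocks (sizes assumed)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The second nvmf_tgt launched right after (-m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme) plays the host role; the later rpc_cmd -s /tmp/host.sock calls in the log are addressed to it.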
00:25:36.541 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:36.541 14:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:36.799 [2024-07-10 14:43:48.875629] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:25:36.799 [2024-07-10 14:43:48.875707] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109922 ] 00:25:36.799 [2024-07-10 14:43:48.993687] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:36.799 [2024-07-10 14:43:49.011932] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.799 [2024-07-10 14:43:49.047512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.057 14:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:37.057 14:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:25:37.057 14:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:37.057 14:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:37.057 14:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.057 14:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:37.057 14:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.057 14:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:37.057 14:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.057 14:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:37.057 14:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.057 14:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:37.057 14:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.057 14:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:37.991 [2024-07-10 14:43:50.221851] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:37.992 [2024-07-10 14:43:50.221888] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:37.992 [2024-07-10 14:43:50.221908] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:38.249 [2024-07-10 14:43:50.307983] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:38.249 [2024-07-10 
14:43:50.365823] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:38.249 [2024-07-10 14:43:50.365922] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:38.249 [2024-07-10 14:43:50.365969] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:38.249 [2024-07-10 14:43:50.365995] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:38.249 [2024-07-10 14:43:50.366031] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:38.249 14:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.249 14:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:38.249 14:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:38.249 [2024-07-10 14:43:50.370316] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1c333d0 was disconnected and freed. delete nvme_qpair. 00:25:38.249 14:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:38.249 14:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.249 14:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:38.249 14:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:38.249 14:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:38.249 14:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:38.249 14:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.249 14:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:38.249 14:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:25:38.249 14:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:25:38.249 14:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:38.249 14:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:38.249 14:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:38.250 14:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:38.250 14:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.250 14:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:38.250 14:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:38.250 14:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:38.250 14:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.250 14:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:38.250 14:43:50 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:39.620 14:43:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:39.620 14:43:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:39.620 14:43:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:39.620 14:43:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.620 14:43:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:39.620 14:43:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:39.620 14:43:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:39.620 14:43:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.620 14:43:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:39.620 14:43:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:40.560 14:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:40.560 14:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:40.560 14:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:40.560 14:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.560 14:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:40.560 14:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:40.560 14:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:40.560 14:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.560 14:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:40.560 14:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:41.493 14:43:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:41.493 14:43:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.493 14:43:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.493 14:43:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:41.493 14:43:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:41.493 14:43:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:41.493 14:43:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:41.493 14:43:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.493 14:43:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:41.493 14:43:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:42.428 14:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:42.428 14:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.428 14:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:42.428 14:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.428 14:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:42.428 14:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:42.428 14:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:42.428 14:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.686 14:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:42.686 14:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:43.621 14:43:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:43.621 14:43:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.621 14:43:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.621 14:43:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:43.622 14:43:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:43.622 14:43:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:43.622 14:43:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:43.622 14:43:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.622 14:43:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:43.622 14:43:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:43.622 [2024-07-10 14:43:55.793213] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:43.622 [2024-07-10 14:43:55.793273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.622 [2024-07-10 14:43:55.793302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.622 [2024-07-10 14:43:55.793316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.622 [2024-07-10 14:43:55.793326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.622 [2024-07-10 14:43:55.793336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.622 [2024-07-10 14:43:55.793345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.622 [2024-07-10 14:43:55.793355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.622 [2024-07-10 14:43:55.793364] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.622 [2024-07-10 14:43:55.793374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.622 [2024-07-10 14:43:55.793383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.622 [2024-07-10 14:43:55.793393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa090 is same with the state(5) to be set 00:25:43.622 [2024-07-10 14:43:55.803205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfa090 (9): Bad file descriptor 00:25:43.622 [2024-07-10 14:43:55.813220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:44.558 14:43:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:44.558 14:43:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:44.558 14:43:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:44.558 14:43:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.558 14:43:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:44.558 14:43:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:44.558 14:43:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:44.818 [2024-07-10 14:43:56.869365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:44.818 [2024-07-10 14:43:56.869463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bfa090 with addr=10.0.0.2, port=4420 00:25:44.818 [2024-07-10 14:43:56.869487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa090 is same with the state(5) to be set 00:25:44.818 [2024-07-10 14:43:56.869538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfa090 (9): Bad file descriptor 00:25:44.818 [2024-07-10 14:43:56.870185] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:44.818 [2024-07-10 14:43:56.870226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:44.818 [2024-07-10 14:43:56.870243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:44.818 [2024-07-10 14:43:56.870261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:44.818 [2024-07-10 14:43:56.870340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
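The repeated blocks of rpc_cmd/jq/sort/xargs records above are one polling pass per second: the host RPC socket is asked for the current bdev list and the result is compared with the name the test expects (nvme0n1 while the path is up, the empty string once the interface has been pulled). A minimal sketch of that loop, assuming SPDK's scripts/rpc.py is on PATH as rpc.py and the host app listens on /tmp/host.sock as in this run; wait_for_bdev here is a simplified stand-in for the helper in host/discovery_remove_ifc.sh and omits its timeout handling:

    # Return the current bdev names as a single space-separated string.
    get_bdev_list() {
        rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Poll once per second until the bdev list equals the expected value
    # (pass '' to wait for the bdev to disappear, as this trace does).
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }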
00:25:44.818 [2024-07-10 14:43:56.870375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:44.818 14:43:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.818 14:43:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:44.818 14:43:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:45.753 [2024-07-10 14:43:57.870452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.753 [2024-07-10 14:43:57.870521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.753 [2024-07-10 14:43:57.870534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.753 [2024-07-10 14:43:57.870545] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:25:45.753 [2024-07-10 14:43:57.870569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.753 [2024-07-10 14:43:57.870601] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:45.753 [2024-07-10 14:43:57.870666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.753 [2024-07-10 14:43:57.870683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.753 [2024-07-10 14:43:57.870697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.753 [2024-07-10 14:43:57.870706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.753 [2024-07-10 14:43:57.870717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.753 [2024-07-10 14:43:57.870726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.753 [2024-07-10 14:43:57.870737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.753 [2024-07-10 14:43:57.870746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.753 [2024-07-10 14:43:57.870756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.753 [2024-07-10 14:43:57.870765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.753 [2024-07-10 14:43:57.870774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
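The reconnect attempts and the removal of the discovery entry recorded here are driven by the timeouts passed when discovery was started earlier in this test: reconnect every second, declare fast I/O failure after one second without a usable path, and give up on the controller entirely after two seconds, at which point it is deleted instead of retried. A sketch of that RPC, using the same host socket and the 10.0.0.2:8009 discovery service from this run:

    # Start discovery against the target and attach whatever it reports;
    # the three timeout options below are what make the lost path above
    # end in controller deletion rather than endless retries.
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach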
00:25:45.753 [2024-07-10 14:43:57.870815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf9550 (9): Bad file descriptor 00:25:45.753 [2024-07-10 14:43:57.871805] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:45.753 [2024-07-10 14:43:57.871831] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:45.753 14:43:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:45.753 14:43:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:45.753 14:43:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:45.753 14:43:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:45.753 14:43:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.753 14:43:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:45.753 14:43:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:45.753 14:43:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.753 14:43:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:45.753 14:43:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:45.753 14:43:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:45.753 14:43:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:45.753 14:43:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:45.753 14:43:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:45.753 14:43:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.753 14:43:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:45.753 14:43:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:45.753 14:43:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:45.753 14:43:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:45.753 14:43:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.753 14:43:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:45.753 14:43:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:47.130 14:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:47.130 14:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:47.130 14:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:47.130 14:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.130 14:43:59 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:47.130 14:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:47.130 14:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:47.130 14:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.130 14:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:47.130 14:43:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:47.696 [2024-07-10 14:43:59.883641] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:47.696 [2024-07-10 14:43:59.883681] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:47.696 [2024-07-10 14:43:59.883700] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:47.696 [2024-07-10 14:43:59.969768] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:47.956 [2024-07-10 14:44:00.025880] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:47.956 [2024-07-10 14:44:00.025946] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:47.956 [2024-07-10 14:44:00.025971] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:47.956 [2024-07-10 14:44:00.025988] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:47.956 [2024-07-10 14:44:00.025998] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:47.956 [2024-07-10 14:44:00.032192] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1bea230 was disconnected and freed. delete nvme_qpair. 
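With the target address re-added and the interface back up, the discovery poller reattaches the subsystem; because the previous controller was deleted, the namespace reappears under a new bdev name, nvme1n1, which is what the test is now waiting for. The restore step as recorded a few records earlier:

    # Bring the target path back inside its network namespace, then wait
    # for discovery to re-attach the subsystem (wait_for_bdev as sketched above).
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1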
00:25:47.956 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:47.956 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:47.956 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.956 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:47.956 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:47.956 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:47.956 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:47.956 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.956 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:47.956 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:47.956 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 109922 00:25:47.956 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 109922 ']' 00:25:47.956 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 109922 00:25:47.956 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:25:47.956 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:47.956 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 109922 00:25:47.956 killing process with pid 109922 00:25:47.956 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:47.956 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:47.956 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 109922' 00:25:47.956 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 109922 00:25:47.956 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 109922 00:25:48.215 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:48.215 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:48.215 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:25:48.215 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:48.215 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:25:48.215 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:48.215 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:48.215 rmmod nvme_tcp 00:25:48.215 rmmod nvme_fabrics 00:25:48.215 rmmod nvme_keyring 00:25:48.215 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:48.215 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:25:48.215 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:25:48.215 
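Teardown then runs in two halves: the host application (pid 109922 in this run) is killed and the nvme-tcp/nvme-fabrics modules are unloaded, after which nvmftestfini stops the target application (pid 109884), removes the test network namespace and flushes the initiator address. A condensed sketch of that sequence; $hostpid and $nvmfpid are assumptions standing in for the pids the harness tracks, and the explicit ip netns delete stands in for the _remove_spdk_ns helper:

    kill "$hostpid"                   # host bdev/discovery app (109922 here)
    modprobe -v -r nvme-tcp           # matches the rmmod output above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                   # nvmf target app (109884 here)
    ip netns delete nvmf_tgt_ns_spdk  # test namespace created by the harness
    ip -4 addr flush nvmf_init_if     # drop the initiator-side test address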
14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 109884 ']' 00:25:48.215 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 109884 00:25:48.215 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 109884 ']' 00:25:48.215 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 109884 00:25:48.215 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:25:48.215 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:48.215 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 109884 00:25:48.215 killing process with pid 109884 00:25:48.215 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:48.215 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:48.215 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 109884' 00:25:48.215 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 109884 00:25:48.215 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 109884 00:25:48.474 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:48.474 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:48.474 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:48.474 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:48.474 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:48.474 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.474 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:48.474 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.474 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:48.474 00:25:48.474 real 0m12.677s 00:25:48.474 user 0m22.932s 00:25:48.474 sys 0m1.416s 00:25:48.474 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:48.474 14:44:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:48.474 ************************************ 00:25:48.474 END TEST nvmf_discovery_remove_ifc 00:25:48.474 ************************************ 00:25:48.474 14:44:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:48.474 14:44:00 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:48.474 14:44:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:48.474 14:44:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:48.474 14:44:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:48.474 ************************************ 00:25:48.474 START TEST nvmf_identify_kernel_target 00:25:48.474 ************************************ 00:25:48.474 14:44:00 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:48.474 * Looking for test storage... 00:25:48.474 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.474 14:44:00 
nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:48.474 14:44:00 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.474 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:48.475 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.475 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:48.475 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:48.475 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:48.475 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:48.475 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:48.475 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:48.475 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:48.475 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:48.475 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:48.475 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:48.475 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:48.475 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:48.475 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:48.475 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:48.475 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:48.475 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:48.475 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:48.475 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:48.475 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:48.733 Cannot find device "nvmf_tgt_br" 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:48.733 Cannot find device "nvmf_tgt_br2" 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:48.733 Cannot find device "nvmf_tgt_br" 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:48.733 
Cannot find device "nvmf_tgt_br2" 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:48.733 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:48.733 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:48.733 14:44:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:48.733 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:48.733 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:48.733 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:49.015 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:49.015 14:44:01 nvmf_tcp.nvmf_identify_kernel_target 
-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:49.015 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:49.015 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:49.015 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:49.015 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:49.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:49.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:25:49.015 00:25:49.015 --- 10.0.0.2 ping statistics --- 00:25:49.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.015 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:25:49.015 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:49.015 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:49.015 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:25:49.015 00:25:49.015 --- 10.0.0.3 ping statistics --- 00:25:49.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.015 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:25:49.015 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:49.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:49.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:25:49.015 00:25:49.015 --- 10.0.0.1 ping statistics --- 00:25:49.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.015 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:25:49.015 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.015 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:25:49.015 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:49.015 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:49.015 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:49.015 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:49.015 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:49.015 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:49.015 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:49.015 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:49.015 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:49.015 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:25:49.015 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:49.015 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:49.015 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.015 14:44:01 nvmf_tcp.nvmf_identify_kernel_target 
-- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.015 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:49.015 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.015 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:49.015 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:49.015 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:49.016 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:49.016 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:49.016 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:49.016 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:49.016 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:49.016 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:49.016 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:49.016 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:25:49.016 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:49.016 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:49.016 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:49.016 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:49.274 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:49.274 Waiting for block devices as requested 00:25:49.274 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:49.532 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:49.532 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:49.532 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:49.532 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:49.532 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:49.532 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:49.532 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:49.532 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:49.532 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:49.532 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:25:49.532 No valid GPT data, bailing 00:25:49.532 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:49.532 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:49.532 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:49.532 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:49.532 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:49.532 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:25:49.532 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:25:49.532 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:25:49.532 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:25:49.532 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:49.532 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:25:49.532 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:25:49.532 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:25:49.790 No valid GPT data, bailing 00:25:49.790 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:25:49.790 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:25:49.790 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:49.790 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:25:49.790 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:49.790 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:25:49.790 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:25:49.790 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:25:49.790 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:25:49.790 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:49.790 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:25:49.790 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:25:49.791 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:25:49.791 No valid GPT data, bailing 00:25:49.791 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:25:49.791 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:49.791 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:49.791 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:25:49.791 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:49.791 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:49.791 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:25:49.791 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:25:49.791 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:49.791 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:49.791 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:25:49.791 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:25:49.791 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:25:49.791 No valid GPT data, bailing 00:25:49.791 14:44:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:49.791 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:49.791 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:49.791 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:25:49.791 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:25:49.791 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
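The helper traced here builds a kernel nvmet target over configfs: one subsystem with a namespace backed by the only non-GPT NVMe device found above (/dev/nvme1n1), plus a TCP port on 10.0.0.1:4420 into which the subsystem is linked. Because xtrace does not show redirection targets, the attribute file names below are inferred from the standard nvmet configfs layout (and from the model string that later appears in the identify output); treat this as a sketch rather than a transcript:

    modprobe nvmet
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1

    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$port"

    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # model seen in the identify output
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"         # backing block device picked above
    echo 1 > "$subsys/namespaces/1/enable"

    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"

    ln -s "$subsys" "$port/subsystems/"                            # expose the subsystem on the port

After this, nvme discover against 10.0.0.1:4420 reports two entries, the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn, which is what the log shows next.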
00:25:49.791 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:49.791 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:49.791 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:49.791 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:25:49.791 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:25:49.791 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:25:49.791 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:49.791 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:25:49.791 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:25:49.791 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:25:49.791 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:49.791 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -a 10.0.0.1 -t tcp -s 4420 00:25:49.791 00:25:49.791 Discovery Log Number of Records 2, Generation counter 2 00:25:49.791 =====Discovery Log Entry 0====== 00:25:49.791 trtype: tcp 00:25:49.791 adrfam: ipv4 00:25:49.791 subtype: current discovery subsystem 00:25:49.791 treq: not specified, sq flow control disable supported 00:25:49.791 portid: 1 00:25:49.791 trsvcid: 4420 00:25:49.791 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:49.791 traddr: 10.0.0.1 00:25:49.791 eflags: none 00:25:49.791 sectype: none 00:25:49.791 =====Discovery Log Entry 1====== 00:25:49.791 trtype: tcp 00:25:49.791 adrfam: ipv4 00:25:49.791 subtype: nvme subsystem 00:25:49.791 treq: not specified, sq flow control disable supported 00:25:49.791 portid: 1 00:25:49.791 trsvcid: 4420 00:25:49.791 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:49.791 traddr: 10.0.0.1 00:25:49.791 eflags: none 00:25:49.791 sectype: none 00:25:49.791 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:49.791 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:50.050 ===================================================== 00:25:50.050 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:50.050 ===================================================== 00:25:50.050 Controller Capabilities/Features 00:25:50.050 ================================ 00:25:50.050 Vendor ID: 0000 00:25:50.050 Subsystem Vendor ID: 0000 00:25:50.050 Serial Number: 04382997bd54f6519853 00:25:50.050 Model Number: Linux 00:25:50.050 Firmware Version: 6.7.0-68 00:25:50.050 Recommended Arb Burst: 0 00:25:50.050 IEEE OUI Identifier: 00 00 00 00:25:50.050 Multi-path I/O 00:25:50.050 May have multiple subsystem ports: No 00:25:50.050 May have multiple controllers: No 00:25:50.050 Associated with SR-IOV VF: No 00:25:50.050 Max Data Transfer Size: Unlimited 00:25:50.050 Max Number of Namespaces: 0 
00:25:50.050 Max Number of I/O Queues: 1024 00:25:50.050 NVMe Specification Version (VS): 1.3 00:25:50.050 NVMe Specification Version (Identify): 1.3 00:25:50.050 Maximum Queue Entries: 1024 00:25:50.050 Contiguous Queues Required: No 00:25:50.050 Arbitration Mechanisms Supported 00:25:50.050 Weighted Round Robin: Not Supported 00:25:50.050 Vendor Specific: Not Supported 00:25:50.050 Reset Timeout: 7500 ms 00:25:50.050 Doorbell Stride: 4 bytes 00:25:50.050 NVM Subsystem Reset: Not Supported 00:25:50.050 Command Sets Supported 00:25:50.050 NVM Command Set: Supported 00:25:50.050 Boot Partition: Not Supported 00:25:50.050 Memory Page Size Minimum: 4096 bytes 00:25:50.050 Memory Page Size Maximum: 4096 bytes 00:25:50.050 Persistent Memory Region: Not Supported 00:25:50.050 Optional Asynchronous Events Supported 00:25:50.050 Namespace Attribute Notices: Not Supported 00:25:50.050 Firmware Activation Notices: Not Supported 00:25:50.050 ANA Change Notices: Not Supported 00:25:50.050 PLE Aggregate Log Change Notices: Not Supported 00:25:50.050 LBA Status Info Alert Notices: Not Supported 00:25:50.050 EGE Aggregate Log Change Notices: Not Supported 00:25:50.050 Normal NVM Subsystem Shutdown event: Not Supported 00:25:50.050 Zone Descriptor Change Notices: Not Supported 00:25:50.050 Discovery Log Change Notices: Supported 00:25:50.050 Controller Attributes 00:25:50.050 128-bit Host Identifier: Not Supported 00:25:50.050 Non-Operational Permissive Mode: Not Supported 00:25:50.050 NVM Sets: Not Supported 00:25:50.050 Read Recovery Levels: Not Supported 00:25:50.050 Endurance Groups: Not Supported 00:25:50.050 Predictable Latency Mode: Not Supported 00:25:50.050 Traffic Based Keep ALive: Not Supported 00:25:50.050 Namespace Granularity: Not Supported 00:25:50.050 SQ Associations: Not Supported 00:25:50.050 UUID List: Not Supported 00:25:50.050 Multi-Domain Subsystem: Not Supported 00:25:50.050 Fixed Capacity Management: Not Supported 00:25:50.050 Variable Capacity Management: Not Supported 00:25:50.050 Delete Endurance Group: Not Supported 00:25:50.050 Delete NVM Set: Not Supported 00:25:50.050 Extended LBA Formats Supported: Not Supported 00:25:50.050 Flexible Data Placement Supported: Not Supported 00:25:50.050 00:25:50.050 Controller Memory Buffer Support 00:25:50.050 ================================ 00:25:50.050 Supported: No 00:25:50.050 00:25:50.050 Persistent Memory Region Support 00:25:50.050 ================================ 00:25:50.050 Supported: No 00:25:50.050 00:25:50.050 Admin Command Set Attributes 00:25:50.050 ============================ 00:25:50.050 Security Send/Receive: Not Supported 00:25:50.050 Format NVM: Not Supported 00:25:50.050 Firmware Activate/Download: Not Supported 00:25:50.050 Namespace Management: Not Supported 00:25:50.050 Device Self-Test: Not Supported 00:25:50.050 Directives: Not Supported 00:25:50.050 NVMe-MI: Not Supported 00:25:50.050 Virtualization Management: Not Supported 00:25:50.050 Doorbell Buffer Config: Not Supported 00:25:50.050 Get LBA Status Capability: Not Supported 00:25:50.050 Command & Feature Lockdown Capability: Not Supported 00:25:50.050 Abort Command Limit: 1 00:25:50.050 Async Event Request Limit: 1 00:25:50.050 Number of Firmware Slots: N/A 00:25:50.050 Firmware Slot 1 Read-Only: N/A 00:25:50.050 Firmware Activation Without Reset: N/A 00:25:50.050 Multiple Update Detection Support: N/A 00:25:50.050 Firmware Update Granularity: No Information Provided 00:25:50.050 Per-Namespace SMART Log: No 00:25:50.050 Asymmetric Namespace Access Log Page: 
Not Supported 00:25:50.050 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:50.050 Command Effects Log Page: Not Supported 00:25:50.050 Get Log Page Extended Data: Supported 00:25:50.050 Telemetry Log Pages: Not Supported 00:25:50.050 Persistent Event Log Pages: Not Supported 00:25:50.050 Supported Log Pages Log Page: May Support 00:25:50.050 Commands Supported & Effects Log Page: Not Supported 00:25:50.050 Feature Identifiers & Effects Log Page:May Support 00:25:50.050 NVMe-MI Commands & Effects Log Page: May Support 00:25:50.050 Data Area 4 for Telemetry Log: Not Supported 00:25:50.050 Error Log Page Entries Supported: 1 00:25:50.050 Keep Alive: Not Supported 00:25:50.050 00:25:50.050 NVM Command Set Attributes 00:25:50.050 ========================== 00:25:50.050 Submission Queue Entry Size 00:25:50.050 Max: 1 00:25:50.050 Min: 1 00:25:50.050 Completion Queue Entry Size 00:25:50.050 Max: 1 00:25:50.050 Min: 1 00:25:50.050 Number of Namespaces: 0 00:25:50.050 Compare Command: Not Supported 00:25:50.050 Write Uncorrectable Command: Not Supported 00:25:50.050 Dataset Management Command: Not Supported 00:25:50.050 Write Zeroes Command: Not Supported 00:25:50.050 Set Features Save Field: Not Supported 00:25:50.050 Reservations: Not Supported 00:25:50.050 Timestamp: Not Supported 00:25:50.050 Copy: Not Supported 00:25:50.050 Volatile Write Cache: Not Present 00:25:50.050 Atomic Write Unit (Normal): 1 00:25:50.050 Atomic Write Unit (PFail): 1 00:25:50.050 Atomic Compare & Write Unit: 1 00:25:50.050 Fused Compare & Write: Not Supported 00:25:50.050 Scatter-Gather List 00:25:50.050 SGL Command Set: Supported 00:25:50.050 SGL Keyed: Not Supported 00:25:50.050 SGL Bit Bucket Descriptor: Not Supported 00:25:50.050 SGL Metadata Pointer: Not Supported 00:25:50.050 Oversized SGL: Not Supported 00:25:50.050 SGL Metadata Address: Not Supported 00:25:50.050 SGL Offset: Supported 00:25:50.050 Transport SGL Data Block: Not Supported 00:25:50.050 Replay Protected Memory Block: Not Supported 00:25:50.050 00:25:50.050 Firmware Slot Information 00:25:50.050 ========================= 00:25:50.050 Active slot: 0 00:25:50.050 00:25:50.050 00:25:50.050 Error Log 00:25:50.050 ========= 00:25:50.050 00:25:50.050 Active Namespaces 00:25:50.050 ================= 00:25:50.050 Discovery Log Page 00:25:50.050 ================== 00:25:50.050 Generation Counter: 2 00:25:50.050 Number of Records: 2 00:25:50.050 Record Format: 0 00:25:50.050 00:25:50.050 Discovery Log Entry 0 00:25:50.050 ---------------------- 00:25:50.050 Transport Type: 3 (TCP) 00:25:50.050 Address Family: 1 (IPv4) 00:25:50.050 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:50.050 Entry Flags: 00:25:50.050 Duplicate Returned Information: 0 00:25:50.050 Explicit Persistent Connection Support for Discovery: 0 00:25:50.050 Transport Requirements: 00:25:50.050 Secure Channel: Not Specified 00:25:50.050 Port ID: 1 (0x0001) 00:25:50.050 Controller ID: 65535 (0xffff) 00:25:50.050 Admin Max SQ Size: 32 00:25:50.050 Transport Service Identifier: 4420 00:25:50.050 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:50.050 Transport Address: 10.0.0.1 00:25:50.050 Discovery Log Entry 1 00:25:50.050 ---------------------- 00:25:50.050 Transport Type: 3 (TCP) 00:25:50.050 Address Family: 1 (IPv4) 00:25:50.050 Subsystem Type: 2 (NVM Subsystem) 00:25:50.050 Entry Flags: 00:25:50.050 Duplicate Returned Information: 0 00:25:50.050 Explicit Persistent Connection Support for Discovery: 0 00:25:50.050 Transport Requirements: 00:25:50.050 
Secure Channel: Not Specified 00:25:50.050 Port ID: 1 (0x0001) 00:25:50.050 Controller ID: 65535 (0xffff) 00:25:50.050 Admin Max SQ Size: 32 00:25:50.050 Transport Service Identifier: 4420 00:25:50.050 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:50.050 Transport Address: 10.0.0.1 00:25:50.050 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:50.310 get_feature(0x01) failed 00:25:50.310 get_feature(0x02) failed 00:25:50.310 get_feature(0x04) failed 00:25:50.310 ===================================================== 00:25:50.310 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:50.310 ===================================================== 00:25:50.310 Controller Capabilities/Features 00:25:50.310 ================================ 00:25:50.310 Vendor ID: 0000 00:25:50.310 Subsystem Vendor ID: 0000 00:25:50.310 Serial Number: 099e8565cc2784309f32 00:25:50.310 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:50.310 Firmware Version: 6.7.0-68 00:25:50.310 Recommended Arb Burst: 6 00:25:50.310 IEEE OUI Identifier: 00 00 00 00:25:50.310 Multi-path I/O 00:25:50.310 May have multiple subsystem ports: Yes 00:25:50.310 May have multiple controllers: Yes 00:25:50.310 Associated with SR-IOV VF: No 00:25:50.310 Max Data Transfer Size: Unlimited 00:25:50.310 Max Number of Namespaces: 1024 00:25:50.310 Max Number of I/O Queues: 128 00:25:50.310 NVMe Specification Version (VS): 1.3 00:25:50.310 NVMe Specification Version (Identify): 1.3 00:25:50.310 Maximum Queue Entries: 1024 00:25:50.310 Contiguous Queues Required: No 00:25:50.310 Arbitration Mechanisms Supported 00:25:50.310 Weighted Round Robin: Not Supported 00:25:50.310 Vendor Specific: Not Supported 00:25:50.310 Reset Timeout: 7500 ms 00:25:50.310 Doorbell Stride: 4 bytes 00:25:50.310 NVM Subsystem Reset: Not Supported 00:25:50.310 Command Sets Supported 00:25:50.310 NVM Command Set: Supported 00:25:50.310 Boot Partition: Not Supported 00:25:50.310 Memory Page Size Minimum: 4096 bytes 00:25:50.310 Memory Page Size Maximum: 4096 bytes 00:25:50.310 Persistent Memory Region: Not Supported 00:25:50.310 Optional Asynchronous Events Supported 00:25:50.310 Namespace Attribute Notices: Supported 00:25:50.310 Firmware Activation Notices: Not Supported 00:25:50.310 ANA Change Notices: Supported 00:25:50.310 PLE Aggregate Log Change Notices: Not Supported 00:25:50.310 LBA Status Info Alert Notices: Not Supported 00:25:50.310 EGE Aggregate Log Change Notices: Not Supported 00:25:50.310 Normal NVM Subsystem Shutdown event: Not Supported 00:25:50.310 Zone Descriptor Change Notices: Not Supported 00:25:50.310 Discovery Log Change Notices: Not Supported 00:25:50.310 Controller Attributes 00:25:50.310 128-bit Host Identifier: Supported 00:25:50.310 Non-Operational Permissive Mode: Not Supported 00:25:50.310 NVM Sets: Not Supported 00:25:50.310 Read Recovery Levels: Not Supported 00:25:50.310 Endurance Groups: Not Supported 00:25:50.311 Predictable Latency Mode: Not Supported 00:25:50.311 Traffic Based Keep ALive: Supported 00:25:50.311 Namespace Granularity: Not Supported 00:25:50.311 SQ Associations: Not Supported 00:25:50.311 UUID List: Not Supported 00:25:50.311 Multi-Domain Subsystem: Not Supported 00:25:50.311 Fixed Capacity Management: Not Supported 00:25:50.311 Variable Capacity Management: Not Supported 00:25:50.311 
Delete Endurance Group: Not Supported 00:25:50.311 Delete NVM Set: Not Supported 00:25:50.311 Extended LBA Formats Supported: Not Supported 00:25:50.311 Flexible Data Placement Supported: Not Supported 00:25:50.311 00:25:50.311 Controller Memory Buffer Support 00:25:50.311 ================================ 00:25:50.311 Supported: No 00:25:50.311 00:25:50.311 Persistent Memory Region Support 00:25:50.311 ================================ 00:25:50.311 Supported: No 00:25:50.311 00:25:50.311 Admin Command Set Attributes 00:25:50.311 ============================ 00:25:50.311 Security Send/Receive: Not Supported 00:25:50.311 Format NVM: Not Supported 00:25:50.311 Firmware Activate/Download: Not Supported 00:25:50.311 Namespace Management: Not Supported 00:25:50.311 Device Self-Test: Not Supported 00:25:50.311 Directives: Not Supported 00:25:50.311 NVMe-MI: Not Supported 00:25:50.311 Virtualization Management: Not Supported 00:25:50.311 Doorbell Buffer Config: Not Supported 00:25:50.311 Get LBA Status Capability: Not Supported 00:25:50.311 Command & Feature Lockdown Capability: Not Supported 00:25:50.311 Abort Command Limit: 4 00:25:50.311 Async Event Request Limit: 4 00:25:50.311 Number of Firmware Slots: N/A 00:25:50.311 Firmware Slot 1 Read-Only: N/A 00:25:50.311 Firmware Activation Without Reset: N/A 00:25:50.311 Multiple Update Detection Support: N/A 00:25:50.311 Firmware Update Granularity: No Information Provided 00:25:50.311 Per-Namespace SMART Log: Yes 00:25:50.311 Asymmetric Namespace Access Log Page: Supported 00:25:50.311 ANA Transition Time : 10 sec 00:25:50.311 00:25:50.311 Asymmetric Namespace Access Capabilities 00:25:50.311 ANA Optimized State : Supported 00:25:50.311 ANA Non-Optimized State : Supported 00:25:50.311 ANA Inaccessible State : Supported 00:25:50.311 ANA Persistent Loss State : Supported 00:25:50.311 ANA Change State : Supported 00:25:50.311 ANAGRPID is not changed : No 00:25:50.311 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:50.311 00:25:50.311 ANA Group Identifier Maximum : 128 00:25:50.311 Number of ANA Group Identifiers : 128 00:25:50.311 Max Number of Allowed Namespaces : 1024 00:25:50.311 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:50.311 Command Effects Log Page: Supported 00:25:50.311 Get Log Page Extended Data: Supported 00:25:50.311 Telemetry Log Pages: Not Supported 00:25:50.311 Persistent Event Log Pages: Not Supported 00:25:50.311 Supported Log Pages Log Page: May Support 00:25:50.311 Commands Supported & Effects Log Page: Not Supported 00:25:50.311 Feature Identifiers & Effects Log Page:May Support 00:25:50.311 NVMe-MI Commands & Effects Log Page: May Support 00:25:50.311 Data Area 4 for Telemetry Log: Not Supported 00:25:50.311 Error Log Page Entries Supported: 128 00:25:50.311 Keep Alive: Supported 00:25:50.311 Keep Alive Granularity: 1000 ms 00:25:50.311 00:25:50.311 NVM Command Set Attributes 00:25:50.311 ========================== 00:25:50.311 Submission Queue Entry Size 00:25:50.311 Max: 64 00:25:50.311 Min: 64 00:25:50.311 Completion Queue Entry Size 00:25:50.311 Max: 16 00:25:50.311 Min: 16 00:25:50.311 Number of Namespaces: 1024 00:25:50.311 Compare Command: Not Supported 00:25:50.311 Write Uncorrectable Command: Not Supported 00:25:50.311 Dataset Management Command: Supported 00:25:50.311 Write Zeroes Command: Supported 00:25:50.311 Set Features Save Field: Not Supported 00:25:50.311 Reservations: Not Supported 00:25:50.311 Timestamp: Not Supported 00:25:50.311 Copy: Not Supported 00:25:50.311 Volatile Write Cache: Present 
00:25:50.311 Atomic Write Unit (Normal): 1 00:25:50.311 Atomic Write Unit (PFail): 1 00:25:50.311 Atomic Compare & Write Unit: 1 00:25:50.311 Fused Compare & Write: Not Supported 00:25:50.311 Scatter-Gather List 00:25:50.311 SGL Command Set: Supported 00:25:50.311 SGL Keyed: Not Supported 00:25:50.311 SGL Bit Bucket Descriptor: Not Supported 00:25:50.311 SGL Metadata Pointer: Not Supported 00:25:50.311 Oversized SGL: Not Supported 00:25:50.311 SGL Metadata Address: Not Supported 00:25:50.311 SGL Offset: Supported 00:25:50.311 Transport SGL Data Block: Not Supported 00:25:50.311 Replay Protected Memory Block: Not Supported 00:25:50.311 00:25:50.311 Firmware Slot Information 00:25:50.311 ========================= 00:25:50.311 Active slot: 0 00:25:50.311 00:25:50.311 Asymmetric Namespace Access 00:25:50.311 =========================== 00:25:50.311 Change Count : 0 00:25:50.311 Number of ANA Group Descriptors : 1 00:25:50.311 ANA Group Descriptor : 0 00:25:50.311 ANA Group ID : 1 00:25:50.311 Number of NSID Values : 1 00:25:50.311 Change Count : 0 00:25:50.311 ANA State : 1 00:25:50.311 Namespace Identifier : 1 00:25:50.311 00:25:50.311 Commands Supported and Effects 00:25:50.311 ============================== 00:25:50.311 Admin Commands 00:25:50.311 -------------- 00:25:50.311 Get Log Page (02h): Supported 00:25:50.311 Identify (06h): Supported 00:25:50.311 Abort (08h): Supported 00:25:50.311 Set Features (09h): Supported 00:25:50.311 Get Features (0Ah): Supported 00:25:50.311 Asynchronous Event Request (0Ch): Supported 00:25:50.311 Keep Alive (18h): Supported 00:25:50.311 I/O Commands 00:25:50.311 ------------ 00:25:50.311 Flush (00h): Supported 00:25:50.311 Write (01h): Supported LBA-Change 00:25:50.311 Read (02h): Supported 00:25:50.311 Write Zeroes (08h): Supported LBA-Change 00:25:50.311 Dataset Management (09h): Supported 00:25:50.311 00:25:50.311 Error Log 00:25:50.311 ========= 00:25:50.311 Entry: 0 00:25:50.311 Error Count: 0x3 00:25:50.311 Submission Queue Id: 0x0 00:25:50.311 Command Id: 0x5 00:25:50.311 Phase Bit: 0 00:25:50.311 Status Code: 0x2 00:25:50.311 Status Code Type: 0x0 00:25:50.311 Do Not Retry: 1 00:25:50.311 Error Location: 0x28 00:25:50.311 LBA: 0x0 00:25:50.311 Namespace: 0x0 00:25:50.311 Vendor Log Page: 0x0 00:25:50.311 ----------- 00:25:50.311 Entry: 1 00:25:50.311 Error Count: 0x2 00:25:50.311 Submission Queue Id: 0x0 00:25:50.311 Command Id: 0x5 00:25:50.311 Phase Bit: 0 00:25:50.311 Status Code: 0x2 00:25:50.311 Status Code Type: 0x0 00:25:50.311 Do Not Retry: 1 00:25:50.311 Error Location: 0x28 00:25:50.311 LBA: 0x0 00:25:50.311 Namespace: 0x0 00:25:50.312 Vendor Log Page: 0x0 00:25:50.312 ----------- 00:25:50.312 Entry: 2 00:25:50.312 Error Count: 0x1 00:25:50.312 Submission Queue Id: 0x0 00:25:50.312 Command Id: 0x4 00:25:50.312 Phase Bit: 0 00:25:50.312 Status Code: 0x2 00:25:50.312 Status Code Type: 0x0 00:25:50.312 Do Not Retry: 1 00:25:50.312 Error Location: 0x28 00:25:50.312 LBA: 0x0 00:25:50.312 Namespace: 0x0 00:25:50.312 Vendor Log Page: 0x0 00:25:50.312 00:25:50.312 Number of Queues 00:25:50.312 ================ 00:25:50.312 Number of I/O Submission Queues: 128 00:25:50.312 Number of I/O Completion Queues: 128 00:25:50.312 00:25:50.312 ZNS Specific Controller Data 00:25:50.312 ============================ 00:25:50.312 Zone Append Size Limit: 0 00:25:50.312 00:25:50.312 00:25:50.312 Active Namespaces 00:25:50.312 ================= 00:25:50.312 get_feature(0x05) failed 00:25:50.312 Namespace ID:1 00:25:50.312 Command Set Identifier: NVM (00h) 
00:25:50.312 Deallocate: Supported 00:25:50.312 Deallocated/Unwritten Error: Not Supported 00:25:50.312 Deallocated Read Value: Unknown 00:25:50.312 Deallocate in Write Zeroes: Not Supported 00:25:50.312 Deallocated Guard Field: 0xFFFF 00:25:50.312 Flush: Supported 00:25:50.312 Reservation: Not Supported 00:25:50.312 Namespace Sharing Capabilities: Multiple Controllers 00:25:50.312 Size (in LBAs): 1310720 (5GiB) 00:25:50.312 Capacity (in LBAs): 1310720 (5GiB) 00:25:50.312 Utilization (in LBAs): 1310720 (5GiB) 00:25:50.312 UUID: 39fd73cc-dee8-44ea-8e60-8bdb59879ec6 00:25:50.312 Thin Provisioning: Not Supported 00:25:50.312 Per-NS Atomic Units: Yes 00:25:50.312 Atomic Boundary Size (Normal): 0 00:25:50.312 Atomic Boundary Size (PFail): 0 00:25:50.312 Atomic Boundary Offset: 0 00:25:50.312 NGUID/EUI64 Never Reused: No 00:25:50.312 ANA group ID: 1 00:25:50.312 Namespace Write Protected: No 00:25:50.312 Number of LBA Formats: 1 00:25:50.312 Current LBA Format: LBA Format #00 00:25:50.312 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:25:50.312 00:25:50.312 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:50.312 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:50.312 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:25:50.312 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:50.312 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:25:50.312 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:50.312 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:50.312 rmmod nvme_tcp 00:25:50.312 rmmod nvme_fabrics 00:25:50.312 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:50.312 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:25:50.312 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:25:50.312 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:50.312 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:50.312 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:50.312 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:50.312 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:50.312 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:50.312 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.312 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:50.312 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.312 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:50.312 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:50.312 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:50.312 
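The discovery log page above advertises two TCP records at 10.0.0.1:4420: the discovery subsystem itself (entry 0) and nqn.2016-06.io.spdk:testnqn (entry 1). A minimal nvme-cli sketch of how an initiator would act on those records (nvme-cli on the initiator side is an assumption, and the test tears the kernel target down again further below, so this is illustrative only):

nvme discover -t tcp -a 10.0.0.1 -s 4420                                  # re-reads the discovery log shown above
nvme connect  -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn   # attach the NVM subsystem from entry 1
nvme disconnect -n nqn.2016-06.io.spdk:testnqn                            # detach again when done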
14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:25:50.312 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:50.312 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:50.312 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:50.312 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:50.312 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:50.312 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:50.570 14:44:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:51.206 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:51.206 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:51.206 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:51.465 ************************************ 00:25:51.465 END TEST nvmf_identify_kernel_target 00:25:51.465 ************************************ 00:25:51.465 00:25:51.465 real 0m2.877s 00:25:51.465 user 0m1.050s 00:25:51.465 sys 0m1.327s 00:25:51.465 14:44:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:51.465 14:44:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:51.465 14:44:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:51.465 14:44:03 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:51.465 14:44:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:51.465 14:44:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:51.465 14:44:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:51.465 ************************************ 00:25:51.465 START TEST nvmf_auth_host 00:25:51.465 ************************************ 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:51.465 * Looking for test storage... 
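clean_kernel_target, traced above, removes the kernel NVMe-oF target by walking its configfs tree back down in reverse creation order. A standalone sketch of the same teardown; the redirect target of the bare 'echo 0' is not shown by xtrace, so disabling the namespace through its enable attribute is an assumption based on the standard nvmet configfs layout:

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
echo 0 > "$subsys/namespaces/1/enable"                 # assumed target of the bare 'echo 0' in the trace
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"   # unlink the subsystem from the TCP port
rmdir "$subsys/namespaces/1" "$port" "$subsys"         # remove namespace, port, then subsystem
modprobe -r nvmet_tcp nvmet                            # unload the kernel target modules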
00:25:51.465 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:51.465 Cannot find device "nvmf_tgt_br" 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:51.465 Cannot find device "nvmf_tgt_br2" 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:51.465 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:51.725 Cannot find device "nvmf_tgt_br" 
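nvmf_veth_init, traced over the next lines, builds a self-contained test network: the initiator side (nvmf_init_if, 10.0.0.1) stays in the root namespace, the target interfaces (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3) live in the nvmf_tgt_ns_spdk namespace, and the bridge nvmf_br joins the root-side peer ends. Reduced to a single target link, the same topology can be sketched as:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                      # target end moves into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br                             # bridge the root-side peer ends together
ip link set nvmf_tgt_br master nvmf_br
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # connectivity check, as the trace does below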
00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:51.725 Cannot find device "nvmf_tgt_br2" 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:51.725 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:51.725 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:51.725 14:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:51.725 14:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:51.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:51.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:25:51.725 00:25:51.725 --- 10.0.0.2 ping statistics --- 00:25:51.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.725 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:25:51.725 14:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:51.725 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:51.725 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:25:51.725 00:25:51.725 --- 10.0.0.3 ping statistics --- 00:25:51.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.725 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:25:51.725 14:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:51.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:51.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:25:51.725 00:25:51.725 --- 10.0.0.1 ping statistics --- 00:25:51.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.725 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:25:51.725 14:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:51.725 14:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:25:51.725 14:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:51.725 14:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:51.725 14:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:51.725 14:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:51.725 14:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:51.725 14:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:51.987 14:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:51.987 14:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:51.987 14:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:51.987 14:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:51.988 14:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.988 14:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=110796 00:25:51.988 14:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:51.988 14:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 110796 00:25:51.988 14:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 110796 ']' 00:25:51.988 14:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.988 14:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:51.988 14:44:04 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.988 14:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:51.988 14:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5d766c3e19fa229b6a68ccba39ecd946 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.hu1 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5d766c3e19fa229b6a68ccba39ecd946 0 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5d766c3e19fa229b6a68ccba39ecd946 0 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5d766c3e19fa229b6a68ccba39ecd946 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.hu1 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.hu1 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.hu1 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9e186a7831c35fca80b6b6162b0a54605c6b8549fd98e81648353903f3f586f1 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.22B 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9e186a7831c35fca80b6b6162b0a54605c6b8549fd98e81648353903f3f586f1 3 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9e186a7831c35fca80b6b6162b0a54605c6b8549fd98e81648353903f3f586f1 3 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9e186a7831c35fca80b6b6162b0a54605c6b8549fd98e81648353903f3f586f1 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:52.923 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.22B 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.22B 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.22B 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c42bc1e1024fe5bf662ec2d1d34b06a2c7fe94256e623d29 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.SCz 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c42bc1e1024fe5bf662ec2d1d34b06a2c7fe94256e623d29 0 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c42bc1e1024fe5bf662ec2d1d34b06a2c7fe94256e623d29 0 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c42bc1e1024fe5bf662ec2d1d34b06a2c7fe94256e623d29 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.SCz 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.SCz 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.SCz 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f53f8a377bf6f9cda91b4ac7285ab9c17ee9c10c4052476e 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ijk 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f53f8a377bf6f9cda91b4ac7285ab9c17ee9c10c4052476e 2 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f53f8a377bf6f9cda91b4ac7285ab9c17ee9c10c4052476e 2 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f53f8a377bf6f9cda91b4ac7285ab9c17ee9c10c4052476e 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ijk 00:25:53.182 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ijk 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.ijk 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=44c930faa48fbefd38751b2d523caf9c 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Wna 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 44c930faa48fbefd38751b2d523caf9c 
1 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 44c930faa48fbefd38751b2d523caf9c 1 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=44c930faa48fbefd38751b2d523caf9c 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Wna 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Wna 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Wna 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bfae9a7ec1a6f83223d5779636c02ae0 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.cIY 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bfae9a7ec1a6f83223d5779636c02ae0 1 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bfae9a7ec1a6f83223d5779636c02ae0 1 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bfae9a7ec1a6f83223d5779636c02ae0 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:53.183 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.cIY 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.cIY 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.cIY 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:53.441 14:44:05 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=98314f6af6c3ad1341bf232a84c4020b8b47b7987689193f 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.y6l 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 98314f6af6c3ad1341bf232a84c4020b8b47b7987689193f 2 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 98314f6af6c3ad1341bf232a84c4020b8b47b7987689193f 2 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=98314f6af6c3ad1341bf232a84c4020b8b47b7987689193f 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.y6l 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.y6l 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.y6l 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8464f8613be08fda3dbda8ad1ec1373b 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.UIw 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8464f8613be08fda3dbda8ad1ec1373b 0 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8464f8613be08fda3dbda8ad1ec1373b 0 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8464f8613be08fda3dbda8ad1ec1373b 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.UIw 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.UIw 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.UIw 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a69fb03fced6a2fac4736f6f99e5bee57f77fa449de7b73b6ced6d8d071a4c59 00:25:53.441 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:53.442 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.2Qw 00:25:53.442 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a69fb03fced6a2fac4736f6f99e5bee57f77fa449de7b73b6ced6d8d071a4c59 3 00:25:53.442 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a69fb03fced6a2fac4736f6f99e5bee57f77fa449de7b73b6ced6d8d071a4c59 3 00:25:53.442 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:53.442 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:53.442 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a69fb03fced6a2fac4736f6f99e5bee57f77fa449de7b73b6ced6d8d071a4c59 00:25:53.442 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:53.442 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:53.442 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.2Qw 00:25:53.442 14:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.2Qw 00:25:53.442 14:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.2Qw 00:25:53.442 14:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:53.442 14:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 110796 00:25:53.442 14:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 110796 ']' 00:25:53.442 14:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.442 14:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:53.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.442 14:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
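Each of the DH-HMAC-CHAP secrets above comes out of gen_dhchap_key: a fixed number of random bytes is hex-dumped from /dev/urandom, wrapped into a DHHC-1 secret by the traced "python -" helper (whose body xtrace does not show), and written 0600 to a mktemp file. A sketch of the same flow for the 32-character null-digest case, assuming the standard DHHC-1 layout of base64(secret bytes + little-endian CRC32) behind the "DHHC-1:<hmac-id>:...:" prefix:

key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex characters; the trace appears to use this string itself as the secret
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" > "$file" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                      # ASCII hex string as the secret bytes (assumption)
crc = zlib.crc32(secret).to_bytes(4, "little")     # DHHC-1 appends a little-endian CRC32 of the secret
print("DHHC-1:00:{}:".format(base64.b64encode(secret + crc).decode()), end="")   # 00 = null digest, as for keys[0]
EOF
chmod 0600 "$file"

The sha256/sha384/sha512 variants above differ only in the key length pulled from /dev/urandom and in the hmac-id byte (1, 2 or 3) written into the prefix.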
00:25:53.442 14:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:53.442 14:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.700 14:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:53.700 14:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:53.700 14:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:53.700 14:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.hu1 00:25:53.700 14:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.700 14:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.700 14:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.700 14:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.22B ]] 00:25:53.700 14:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.22B 00:25:53.700 14:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.700 14:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.960 14:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.960 14:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:53.960 14:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.SCz 00:25:53.960 14:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.960 14:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.ijk ]] 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ijk 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Wna 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.cIY ]] 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.cIY 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
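Once the target process is listening on /var/tmp/spdk.sock, host/auth.sh@80-@82 registers every generated key file with it: keyN names carry the host keys and ckeyN the controller keys, and because ckeys[4] is empty no ckey4 entry is created. The same sequence expressed as plain rpc.py calls is shown below; rpc_cmd in the trace is the harness wrapper around scripts/rpc.py, and $SPDK_DIR is a placeholder, but the key names and temp-file paths are taken verbatim from this run.

  # Register the DH-HMAC-CHAP key files with the running SPDK application.
  rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"

  $rpc keyring_file_add_key key0  /tmp/spdk.key-null.hu1      # keys[0]
  $rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.22B    # ckeys[0]
  $rpc keyring_file_add_key key1  /tmp/spdk.key-null.SCz      # keys[1]
  $rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ijk    # ckeys[1]
  $rpc keyring_file_add_key key2  /tmp/spdk.key-sha256.Wna    # keys[2]
  $rpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.cIY    # ckeys[2]
  $rpc keyring_file_add_key key3  /tmp/spdk.key-sha384.y6l    # keys[3]
  $rpc keyring_file_add_key ckey3 /tmp/spdk.key-null.UIw      # ckeys[3]
  $rpc keyring_file_add_key key4  /tmp/spdk.key-sha512.2Qw    # keys[4]; ckeys[4] is empty, no ckey4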
00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.y6l 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.UIw ]] 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.UIw 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.2Qw 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
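nvmet_auth_init then builds a kernel soft target through configfs: configure_kernel_target (nvmf/common.sh@632 onward, traced below) loads nvmet, scans /sys/block for an unused, non-zoned local NVMe namespace to use as the backing device, creates the subsystem, namespace and port directories, and links the port to the subsystem. The redirect targets of the echo commands are hidden by xtrace, so the attribute file names in this sketch are assumptions based on the standard nvmet configfs layout; the paths and echoed values come from the trace.

  # Kernel NVMe-oF soft target via configfs, mirroring nvmf/common.sh@658-@677 below.
  nvmet=/sys/kernel/config/nvmet
  subnqn=nqn.2024-02.io.spdk:cnode0
  subsys=$nvmet/subsystems/$subnqn

  modprobe nvmet
  mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"

  echo "SPDK-$subnqn" > "$subsys/attr_model"              # assumed target of the echo at @665
  echo 1              > "$subsys/attr_allow_any_host"     # assumed target of the echo at @667
  echo /dev/nvme1n1   > "$subsys/namespaces/1/device_path"
  echo 1              > "$subsys/namespaces/1/enable"

  echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
  echo tcp      > "$nvmet/ports/1/addr_trtype"
  echo 4420     > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4     > "$nvmet/ports/1/addr_adrfam"

  ln -s "$subsys" "$nvmet/ports/1/subsystems/"            # expose the subsystem on the TCP listener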
00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:53.960 14:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:54.218 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:54.218 Waiting for block devices as requested 00:25:54.218 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:54.477 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:25:55.044 No valid GPT data, bailing 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:25:55.044 No valid GPT data, bailing 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:25:55.044 14:44:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:25:55.303 No valid GPT data, bailing 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:25:55.303 No valid GPT data, bailing 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:25:55.303 14:44:07 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -a 10.0.0.1 -t tcp -s 4420 00:25:55.303 00:25:55.303 Discovery Log Number of Records 2, Generation counter 2 00:25:55.303 =====Discovery Log Entry 0====== 00:25:55.303 trtype: tcp 00:25:55.303 adrfam: ipv4 00:25:55.303 subtype: current discovery subsystem 00:25:55.303 treq: not specified, sq flow control disable supported 00:25:55.303 portid: 1 00:25:55.303 trsvcid: 4420 00:25:55.303 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:55.303 traddr: 10.0.0.1 00:25:55.303 eflags: none 00:25:55.303 sectype: none 00:25:55.303 =====Discovery Log Entry 1====== 00:25:55.303 trtype: tcp 00:25:55.303 adrfam: ipv4 00:25:55.303 subtype: nvme subsystem 00:25:55.303 treq: not specified, sq flow control disable supported 00:25:55.303 portid: 1 00:25:55.303 trsvcid: 4420 00:25:55.303 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:55.303 traddr: 10.0.0.1 00:25:55.303 eflags: none 00:25:55.303 sectype: none 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:55.303 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: ]] 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- 
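With nvme discover confirming the subsystem is reachable on 10.0.0.1:4420, host/auth.sh@36-@38 allow-lists the host NQN on the target and nvmet_auth_set_key (host/auth.sh@42-@51) programs the hash, DH group, host key and controller key for that host. Only the echoed values are visible in the trace, so the dhchap_* attribute names below are assumptions based on the kernel nvmet host configfs layout, and the long secrets are truncated for readability.

  # Target-side DH-HMAC-CHAP setup for one host, mirroring host/auth.sh@36-@51 above.
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  hostnqn=nqn.2024-02.io.spdk:host0

  mkdir "$nvmet/hosts/$hostnqn"
  echo 0 > "$subsys/attr_allow_any_host"                  # assumed target of the echo at host/auth.sh@37
  ln -s "$nvmet/hosts/$hostnqn" "$subsys/allowed_hosts/"

  # nvmet_auth_set_key sha256 ffdhe2048 1 (attribute names assumed):
  echo 'hmac(sha256)'              > "$nvmet/hosts/$hostnqn/dhchap_hash"
  echo ffdhe2048                   > "$nvmet/hosts/$hostnqn/dhchap_dhgroup"
  echo 'DHHC-1:00:YzQyYmMx...==:'  > "$nvmet/hosts/$hostnqn/dhchap_key"       # keys[1], truncated
  echo 'DHHC-1:02:ZjUzZjhh...==:'  > "$nvmet/hosts/$hostnqn/dhchap_ctrl_key"  # ckeys[1], truncated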
host/auth.sh@93 -- # IFS=, 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.562 nvme0n1 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- 
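connect_authenticate (host/auth.sh@55-@61) exercises the initiator side over SPDK RPCs: bdev_nvme_set_options advertises which digests and DH groups to offer, and bdev_nvme_attach_controller connects to the kernel target using the registered key names. This first pass offers every digest and group at once with key1/ckey1; reduced to plain rpc.py calls (the wrapper and $SPDK_DIR are the only assumptions), the sequence is:

  # Initiator-side DH-HMAC-CHAP attach, as issued through rpc_cmd in host/auth.sh@60-@61.
  rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"

  $rpc bdev_nvme_set_options \
      --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1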
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: ]] 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:25:55.562 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:55.563 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.563 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:55.563 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:55.563 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:55.563 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.563 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:55.563 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.563 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.563 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.563 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.563 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.563 14:44:07 nvmf_tcp.nvmf_auth_host -- 
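From host/auth.sh@100 onward the test sweeps that handshake over every digest, DH group and key index: nvmet_auth_set_key reprograms the kernel host entry, bdev_nvme_set_options is narrowed to the single digest and group under test, the controller is attached with keyN (plus ckeyN when one exists; keyid 4 has no controller key, so the ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} expansion at host/auth.sh@58 simply drops that flag), the attach is verified via bdev_nvme_get_controllers, and the controller is detached again. A condensed sketch of that loop is below; $rpc is the same rpc.py wrapper as before, and nvmet_auth_set_key and keys/ckeys stand for the harness pieces shown earlier.

  # Digest/dhgroup/keyid sweep from host/auth.sh@100-@104 and the checks at @64-@65.
  for digest in sha256 sha384 sha512; do
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
      for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side (configfs writes)

        $rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" \
            ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}   # omitted for keyid 4

        # the attach only counts if the controller actually shows up
        [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        $rpc bdev_nvme_detach_controller nvme0
      done
    done
  done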
nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.819 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.819 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.819 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.819 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:55.819 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.819 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:55.819 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:55.819 14:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:55.819 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:55.819 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.819 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.819 nvme0n1 00:25:55.819 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.819 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.819 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.819 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.819 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.819 14:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.819 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.819 14:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.819 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.819 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.819 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.819 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.819 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:55.819 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.819 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:55.819 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:55.819 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:55.819 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: ]] 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.820 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.078 nvme0n1 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.078 14:44:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: ]] 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.078 nvme0n1 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.078 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: ]] 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:56.336 14:44:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.336 nvme0n1 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:56.336 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:56.337 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:25:56.337 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:56.337 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:56.337 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.337 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:56.337 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:56.337 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:56.337 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.337 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:56.337 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.337 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.337 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.337 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.337 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.337 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.337 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.337 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.337 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.337 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.337 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.337 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.337 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.337 14:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.337 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:56.337 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:56.337 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.595 nvme0n1 00:25:56.595 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.595 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.595 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.595 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.595 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.595 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.595 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.595 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.595 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.595 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.595 14:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.595 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:56.595 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.595 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:56.595 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.595 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:56.595 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:56.595 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:56.595 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:25:56.595 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:25:56.595 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:56.595 14:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:56.882 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:25:56.882 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: ]] 00:25:56.882 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:25:56.882 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:56.882 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.882 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:56.882 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:56.882 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:56.882 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.882 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:25:56.882 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.882 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.882 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.882 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.882 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.882 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.882 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.882 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.882 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.882 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.882 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.882 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.882 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.882 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.882 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:56.882 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.882 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.140 nvme0n1 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: ]] 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.140 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.141 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:57.141 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.141 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.141 nvme0n1 00:25:57.141 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.141 14:44:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.141 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.141 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.141 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: ]] 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.399 nvme0n1 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.399 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: ]] 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.658 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.659 nvme0n1 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
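
The nvmf/common.sh lines traced just above come from the helper that resolves which address the host side should dial: it keeps a per-transport map of variable names and dereferences the one matching the transport under test. A minimal sketch of that logic, reconstructed from the xtrace (the NVMF_INITIATOR_IP / NVMF_FIRST_TARGET_IP names and the 10.0.0.1 result are taken from the trace; the function wrapper and the TEST_TRANSPORT spelling are assumptions):

  # Hedged reconstruction of the traced IP-selection helper.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # stores the variable *name*, not its value
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      # Bail out if the transport is unset or has no mapped variable (assumed guard).
      [[ -z ${TEST_TRANSPORT:-} ]] && return 1
      [[ -z ${ip_candidates[$TEST_TRANSPORT]:-} ]] && return 1

      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1      # indirect check, e.g. $NVMF_INITIATOR_IP
      echo "${!ip}"                    # traced result: 10.0.0.1
  }
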
00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.659 14:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.918 nvme0n1 00:25:57.918 14:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.918 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.918 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.918 14:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.918 14:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.918 14:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.918 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.918 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.918 14:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.918 14:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.918 14:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.918 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:57.918 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.918 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:57.918 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.918 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:57.918 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:57.918 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:57.918 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:25:57.918 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:25:57.918 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:57.918 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:58.485 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:25:58.485 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: ]] 00:25:58.485 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:25:58.485 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:58.485 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
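
From here the trace repeats one cycle per digest/dhgroup/keyid combination: sha256 with ffdhe4096 for keyids 0 through 4 in this stretch, then ffdhe6144 and ffdhe8192 further down. A condensed sketch of that cycle as it appears in the xtrace; rpc_cmd, the keys/ckeys arrays, the NQNs and every flag are taken verbatim from the trace, while the loop scaffolding and the digest/dhgroup assignments are illustrative assumptions:

  # One authentication round per key, as repeated in the trace (hedged reconstruction).
  digest=sha256
  dhgroup=ffdhe4096
  for keyid in "${!keys[@]}"; do
      # Target side is prepared first (traced as nvmet_auth_set_key "$digest" "$dhgroup" "$keyid").
      # Host side: restrict bdev_nvme to the digest/dhgroup under test.
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

      # Connect with the host key, adding the controller key only when a ckey exists for this id.
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

      # Verify the controller actually authenticated, then detach before the next key.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  done
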
00:25:58.485 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:58.485 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:58.485 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:58.485 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.485 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:58.485 14:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.485 14:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.485 14:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.485 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.485 14:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.485 14:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.485 14:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.485 14:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.485 14:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.485 14:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.485 14:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.485 14:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.485 14:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.485 14:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.485 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:58.485 14:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.485 14:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.743 nvme0n1 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: ]] 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.743 14:44:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.001 nvme0n1 00:25:59.001 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.001 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.001 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.001 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.001 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.001 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.001 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.001 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.001 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.001 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: ]] 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.002 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.260 nvme0n1 00:25:59.260 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.260 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.260 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.260 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.260 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.260 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.260 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.260 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.260 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.260 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.260 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.260 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.260 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:59.260 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.260 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:59.260 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:59.260 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:59.260 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:25:59.260 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:25:59.260 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:59.260 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:59.260 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:25:59.260 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: ]] 00:25:59.260 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:25:59.260 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:59.260 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.261 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:59.261 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:59.261 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:59.261 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.261 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:59.261 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.261 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.261 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.261 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.261 14:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.261 14:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.261 14:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.261 14:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.261 14:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.261 14:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:59.261 14:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.261 14:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:59.261 14:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:59.261 14:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:59.261 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:59.261 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.261 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.519 nvme0n1 00:25:59.519 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.519 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:59.519 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.519 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.519 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.519 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.519 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.519 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.519 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.519 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.778 14:44:11 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.778 14:44:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.778 nvme0n1 00:25:59.778 14:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.778 14:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.778 14:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.778 14:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.778 14:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.778 14:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.037 14:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.037 14:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.037 14:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.037 14:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.037 14:44:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.037 14:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:00.037 14:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.037 14:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:00.037 14:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.037 14:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.037 14:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:00.037 14:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:00.037 14:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:26:00.037 14:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:26:00.037 14:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.037 14:44:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:01.936 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:26:01.936 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: ]] 00:26:01.936 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:26:01.936 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:01.936 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.936 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.936 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:01.936 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:01.936 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.936 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:01.936 14:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.936 14:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.936 14:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.936 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.936 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.936 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.936 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.936 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.936 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.936 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:01.936 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.936 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:01.936 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:01.936 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:01.936 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:01.936 14:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.936 14:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.207 nvme0n1 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: ]] 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.207 14:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.801 nvme0n1 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: ]] 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.801 
14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.801 14:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.060 nvme0n1 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: ]] 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.060 14:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.318 14:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.318 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.318 14:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.318 14:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.318 14:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.318 14:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.318 14:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.318 14:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:03.318 14:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.318 14:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:03.318 14:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:03.318 14:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:03.318 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:03.318 14:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.319 14:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.576 nvme0n1 00:26:03.576 14:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.576 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.576 14:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.576 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.576 14:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.577 14:44:15 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.577 14:44:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.145 nvme0n1 00:26:04.145 14:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.145 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.145 14:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.145 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.145 14:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.145 14:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.145 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.145 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.145 14:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.145 14:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.145 14:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.145 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:04.145 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.145 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:04.145 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.145 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.145 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:04.145 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:04.145 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: ]] 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.146 14:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.713 nvme0n1 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.713 14:44:16 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: ]] 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.713 14:44:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.973 14:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.973 14:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.973 14:44:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:04.973 14:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.973 14:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.973 14:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.973 14:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.973 14:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.973 14:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.973 14:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.973 14:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.973 14:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.973 14:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:04.973 14:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.973 14:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.540 nvme0n1 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: ]] 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.540 14:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.105 nvme0n1 00:26:06.105 14:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.105 14:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.105 14:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.105 14:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.105 14:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.105 14:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.363 
14:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: ]] 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
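The records above repeat one pattern per digest/dhgroup/keyid combination: connect_authenticate first restricts the initiator with bdev_nvme_set_options, resolves the initiator address via get_main_ns_ip, then attaches the controller with the host key, adding --dhchap-ctrlr-key only when a ckey is defined for that keyid. A condensed sketch of that RPC sequence, using only commands that appear in this trace and assuming rpc_cmd and get_main_ns_ip are the test-harness helpers being traced (sha256 / ffdhe8192 / key3 shown as the example):

  # one authentication attempt, as traced for connect_authenticate sha256 ffdhe8192 3
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  ip=$(get_main_ns_ip)        # resolves to NVMF_INITIATOR_IP (10.0.0.1) for the tcp transport
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3   # ckey3 exists, so bidirectional auth is requested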
00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.363 14:44:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.930 nvme0n1 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:06.930 
14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.930 14:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.866 nvme0n1 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: ]] 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.866 14:44:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.866 nvme0n1 00:26:07.866 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.866 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.866 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.866 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.866 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: ]] 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
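The nvmet_auth_set_key halves of these records show the target side of each pass: the digest is echoed as 'hmac(<digest>)', then the dhgroup, then the DHHC-1 host key, and the controller key only when one exists for that keyid. The xtrace excerpt records only the echo commands; the sketch below assumes they are redirected into the kernel nvmet host attributes under configfs, so the configfs path and hostnqn value are assumptions rather than something shown in this trace:

  # assumed target-side wiring for one nvmet_auth_set_key call (destinations not visible in this excerpt)
  cfs=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed configfs location
  echo "hmac($digest)" > "$cfs/dhchap_hash"       # e.g. 'hmac(sha384)'
  echo "$dhgroup"      > "$cfs/dhchap_dhgroup"    # e.g. ffdhe2048
  echo "$key"          > "$cfs/dhchap_key"        # DHHC-1:... host secret
  [[ -z "$ckey" ]] || echo "$ckey" > "$cfs/dhchap_ctrl_key"   # only for keyids that define a ckey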
00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.867 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.126 nvme0n1 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: ]] 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.126 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.385 nvme0n1 00:26:08.385 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.385 14:44:20 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.385 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.385 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.385 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.385 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.385 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.385 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.385 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.385 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.385 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.385 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.385 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:08.385 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.385 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: ]] 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.386 nvme0n1 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.386 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:08.645 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:08.645 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:08.645 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.645 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:08.645 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.645 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.645 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.645 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.645 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.645 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.645 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.645 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.645 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.645 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.645 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.645 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.645 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.645 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.646 nvme0n1 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: ]] 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
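Each successful attach is verified and torn down before the next combination, which is what the recurring nvme0n1 / bdev_nvme_get_controllers / bdev_nvme_detach_controller records are doing; the nvme0n1 lines are the namespace bdev appearing for the attached controller. A short sketch of that check, using only commands from this trace:

  # verify the controller registered under the expected name, then detach it
  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ "$name" == "nvme0" ]]            # the [[ nvme0 == \n\v\m\e\0 ]] checks in the trace
  rpc_cmd bdev_nvme_detach_controller nvme0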
00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.646 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.905 nvme0n1 00:26:08.905 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.905 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.905 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.905 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.905 14:44:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.905 14:44:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: ]] 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
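The host/auth.sh line numbers in the prefixes (@100 through @104) give away the driver producing all of this output: an outer loop over digests, a middle loop over dhgroups, and an inner loop over key ids, with each pass calling nvmet_auth_set_key and then connect_authenticate. A condensed reconstruction of that loop, assuming keys and ckeys are the arrays of DHHC-1 secrets seen in the trace:

  # sweep inferred from the @100-@104 trace prefixes
  for digest in "${digests[@]}"; do          # sha256, sha384, ...
    for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 .. ffdhe8192
      for keyid in "${!keys[@]}"; do         # 0..4; keyid 4 has no ckey in this trace
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done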
00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.905 nvme0n1 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.905 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.164 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.164 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.164 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.164 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.164 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.164 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:26:09.164 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:09.164 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.164 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.164 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:09.164 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:09.164 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:26:09.164 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:26:09.164 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.164 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:09.164 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:26:09.164 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: ]] 00:26:09.164 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:26:09.164 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:09.164 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.164 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.164 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.165 nvme0n1 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: ]] 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.165 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.425 nvme0n1 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.425 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.684 nvme0n1 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.684 14:44:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: ]] 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.684 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.685 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.685 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.685 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.685 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.685 14:44:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.685 14:44:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:09.685 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.685 14:44:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.944 nvme0n1 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: ]] 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.944 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.203 nvme0n1 00:26:10.203 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.203 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.203 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.203 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.203 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.203 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.204 14:44:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: ]] 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.204 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.463 nvme0n1 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: ]] 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:10.463 14:44:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.463 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.722 nvme0n1 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:10.722 14:44:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.979 nvme0n1 00:26:10.979 14:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: ]] 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.980 14:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.546 nvme0n1 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: ]] 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.546 14:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.806 nvme0n1 00:26:11.806 14:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.806 14:44:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.806 14:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.806 14:44:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.806 14:44:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: ]] 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.806 14:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.374 nvme0n1 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: ]] 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.374 14:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.685 nvme0n1 00:26:12.685 14:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.685 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.685 14:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.685 14:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.685 14:44:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.944 14:44:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
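The nvmet_auth_set_key steps traced above (host/auth.sh@42-51) load each digest/dhgroup/key combination into the kernel nvmet target before the host tries to connect. Below is a minimal sketch of that step in bash, the language of the traced scripts: the echoed values ('hmac(sha384)', the ffdhe group name, the DHHC-1 key and optional controller key) are taken from the log, while the function name and the configfs paths are assumptions, since xtrace hides the redirect targets of the echoes.

# Target-side half of the traced loop: provision one DH-HMAC-CHAP key pair
# for the host NQN before the host attempts to connect.
# ASSUMPTION: the configfs attribute paths follow the usual
# /sys/kernel/config/nvmet layout; they are not visible in the trace.
nvmet_auth_set_key_sketch() {
    local digest=$1 dhgroup=$2 key=$3 ckey=$4
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})" > "${host}/dhchap_hash"     # e.g. 'hmac(sha384)'
    echo "${dhgroup}"      > "${host}/dhchap_dhgroup"  # e.g. ffdhe6144
    echo "${key}"          > "${host}/dhchap_key"      # DHHC-1:xx:...:
    # The keyid=4 iterations in the trace carry no controller key, so this
    # attribute is only written when a ckey is supplied.
    if [[ -n "${ckey}" ]]; then
        echo "${ckey}" > "${host}/dhchap_ctrl_key"
    fi
}

# Example mirroring the sha384/ffdhe6144/keyid=2 iteration traced above:
# nvmet_auth_set_key_sketch sha384 ffdhe6144 \
#     "DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6:" \
#     "DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki:"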
00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.944 14:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.202 nvme0n1 00:26:13.202 14:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.202 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.202 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.202 14:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.202 14:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.202 14:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.202 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.202 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.202 14:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.202 14:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.202 14:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.202 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:13.202 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.203 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:13.203 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.203 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.203 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:13.203 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:13.203 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:26:13.203 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:26:13.203 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.203 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:13.203 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:26:13.203 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: ]] 00:26:13.203 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:26:13.203 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:13.203 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
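Each provisioned key then goes through connect_authenticate (host/auth.sh@104 in the trace): the host is restricted to the digest/dhgroup pair under test, attaches to the target with the matching key, the controller name is checked, and the controller is detached before the next combination. The condensed sketch below uses only RPCs visible in the trace; the RPC names, flags, address and NQNs are copied from the log, rpc_cmd and the ckeys array come from the traced SPDK test scripts, and the function name is hypothetical.

# Host-side cycle run for every digest/dhgroup/keyid combination in the log.
connect_authenticate_sketch() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Mirrors host/auth.sh@58: pass a controller key only when ckeyN exists.
    local ckey_opt=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    # Restrict the host to the digest/dhgroup pair under test.
    rpc_cmd bdev_nvme_set_options \
        --dhchap-digests "${digest}" --dhchap-dhgroups "${dhgroup}"

    # Attach over TCP with the matching key; key${keyid}/ckey${keyid} are
    # key names registered earlier in the test, outside this excerpt.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey_opt[@]}"

    # Authentication succeeded if the controller shows up as nvme0 ...
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # ... and the loop tears it down before the next combination.
    rpc_cmd bdev_nvme_detach_controller nvme0
}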
00:26:13.203 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.203 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:13.203 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:13.203 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.203 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:13.203 14:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.203 14:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.203 14:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.203 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.461 14:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.461 14:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.461 14:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.461 14:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.461 14:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.461 14:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.461 14:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.461 14:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:13.461 14:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.461 14:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.461 14:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:13.461 14:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.461 14:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.028 nvme0n1 00:26:14.028 14:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.028 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.028 14:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.028 14:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.028 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.028 14:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.028 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.028 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.028 14:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.028 14:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.028 14:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.028 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.028 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: ]] 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.029 14:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.595 nvme0n1 00:26:14.595 14:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.595 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.595 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.595 14:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.595 14:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.595 14:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.853 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: ]] 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.854 14:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.420 nvme0n1 00:26:15.420 14:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.420 14:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.420 14:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.420 14:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: ]] 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.421 14:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.988 nvme0n1 00:26:15.988 14:44:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.988 14:44:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:15.988 14:44:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.988 14:44:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.988 14:44:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.247 14:44:28 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.247 14:44:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.814 nvme0n1 00:26:16.814 14:44:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: ]] 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.814 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.072 nvme0n1 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.072 14:44:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: ]] 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.072 nvme0n1 00:26:17.072 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: ]] 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.331 nvme0n1 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.331 14:44:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: ]] 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.331 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.331 14:44:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:17.332 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.332 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.590 nvme0n1 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.590 nvme0n1 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.590 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: ]] 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.849 14:44:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.849 nvme0n1 00:26:17.849 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.849 
14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.849 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.849 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.849 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.849 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: ]] 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.108 14:44:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.108 nvme0n1 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
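Each iteration in the trace above begins with the test's `nvmet_auth_set_key` helper, which selects the digest, DH group, and DHHC-1 secrets for one key index and echoes them (the xtrace does not show where those echoes are redirected). The sketch below reconstructs the likely shape of that step, assuming the secrets are written into the Linux kernel nvmet target's per-host configfs attributes; the configfs path and attribute names are assumptions, not taken from this log.

```bash
# Sketch of the target-side step traced at host/auth.sh@42-51: select key
# material by index and publish it for the allowed host NQN.
# ASSUMPTION: a kernel nvmet target with per-host dhchap_* configfs attributes;
# neither the path nor the attribute names appear in the xtrace above.
nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
declare -a keys ckeys        # DHHC-1 secrets, populated earlier in the test

nvmet_auth_set_key_sketch() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}

    echo "hmac(${digest})" > "$nvmet_host/dhchap_hash"      # e.g. hmac(sha512)
    echo "$dhgroup"        > "$nvmet_host/dhchap_dhgroup"   # e.g. ffdhe3072
    echo "$key"            > "$nvmet_host/dhchap_key"       # host secret
    # Key index 4 in this run has no controller secret, so this write is conditional.
    [[ -z $ckey ]] || echo "$ckey" > "$nvmet_host/dhchap_ctrl_key"
}
```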
00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:26:18.108 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: ]] 00:26:18.109 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:26:18.109 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:18.109 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.109 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:18.109 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:18.109 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:18.109 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.109 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:18.109 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.109 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.109 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.109 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.109 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.109 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.109 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.109 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.109 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.109 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.109 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.109 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.109 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.109 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.109 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:18.109 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.109 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.368 nvme0n1 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.368 14:44:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: ]] 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
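Before every attach, the trace runs `get_main_ns_ip`, whose xtrace shows an associative array mapping transports to the names of environment variables and a final `echo 10.0.0.1`. A minimal sketch of that selection logic follows; the `TEST_TRANSPORT` variable name and the use of indirect expansion are assumptions, since only the candidate variable names and the echoed address are visible in the log.

```bash
# Sketch of get_main_ns_ip as its xtrace suggests: map the transport under test
# to the name of the variable holding the address, then print that variable's
# value (10.0.0.1 in this run).
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )

    [[ -z ${TEST_TRANSPORT:-tcp} ]] && return 1     # transport must be known (tcp here)
    ip=${ip_candidates[${TEST_TRANSPORT:-tcp}]}     # e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1                     # the variable must be populated
    echo "${!ip}"                                   # e.g. 10.0.0.1
}
```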
00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.368 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.685 nvme0n1 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:18.685 
14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.685 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.686 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.686 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.686 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.686 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.686 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.686 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.686 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.686 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.686 14:44:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.686 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:18.686 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.686 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.686 nvme0n1 00:26:18.686 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.686 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.686 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.686 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.686 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.686 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.944 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.944 14:44:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.944 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.944 14:44:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: ]] 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.944 nvme0n1 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.944 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.202 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.202 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.202 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.202 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.202 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.202 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.202 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.202 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:19.202 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.202 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.202 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:19.202 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:19.202 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:26:19.202 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:26:19.203 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.203 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:19.203 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:26:19.203 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: ]] 00:26:19.203 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:26:19.203 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:19.203 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.203 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.203 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:19.203 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:19.203 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.203 14:44:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:19.203 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.203 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.203 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.203 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.203 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.203 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.203 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.203 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.203 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.203 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.203 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.203 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.203 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.203 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.203 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:19.203 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.203 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.462 nvme0n1 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
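The `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` line that recurs in the trace is what makes bidirectional authentication optional per key index: when `ckeys[keyid]` is empty (as it is for index 4 in this run), the array expands to nothing and the attach is issued without a controller key. A small, self-contained illustration of that expansion, using hypothetical placeholder values:

```bash
# Illustration of the :+ "alternate value" expansion used by the test.
# ckeys[1] is set, ckeys[4] is deliberately empty, mirroring this log.
declare -a ckeys
ckeys[1]="DHHC-1:02:example-controller-secret"   # hypothetical placeholder value
ckeys[4]=""

for keyid in 1 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> extra args: ${#ckey[@]} (${ckey[*]})"
done
# keyid=1 -> extra args: 2 (--dhchap-ctrlr-key ckey1)
# keyid=4 -> extra args: 0 ()
```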
00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: ]] 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.462 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.721 nvme0n1 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq 
-r '.[].name' 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: ]] 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local 
ip 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.721 14:44:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.722 14:44:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:19.722 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.722 14:44:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.980 nvme0n1 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.980 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.239 nvme0n1 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: ]] 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
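The `host/auth.sh@101-104` lines in the trace mark the driver loop: every DH group is exercised against every configured key index, first provisioning the target-side key, then authenticating from the host. A sketch of that structure is below; the dhgroup list is restricted to the groups visible in this excerpt, and `keys`/`ckeys` are the test's own secret arrays, assumed to be populated earlier.

```bash
# Sketch of the loop implied by host/auth.sh@101-104 for the sha512 pass.
digest=sha512
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups seen in this excerpt; the full test may cover more
declare -a keys ckeys                                 # DHHC-1 secrets, populated earlier in the test

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do                    # indexes 0..4 in this run
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # target side
        connect_authenticate "$digest" "$dhgroup" "$keyid"    # initiator side
    done
done
```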
00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.239 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.804 nvme0n1 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: ]] 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
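Each `connect_authenticate` pass in the trace then issues the same RPC sequence on the initiator: restrict the allowed digest and DH group, attach with the per-index DH-HMAC-CHAP key (plus the controller key when one exists), confirm the controller came up, and detach. The sketch below reassembles that sequence from the `rpc_cmd` calls visible above; `rpc_cmd` is assumed to be the harness wrapper around SPDK's `scripts/rpc.py`, and `key1`/`ckey1` are key names assumed to have been registered with the bdev layer earlier in the test.

```bash
# Sketch of one connect_authenticate pass (here sha512 / ffdhe6144 / keyid 1),
# reassembled from the rpc_cmd calls in the trace.
rpc_cmd() { "${rootdir:-.}/scripts/rpc.py" "$@"; }   # ASSUMPTION: harness wrapper

digest=sha512 dhgroup=ffdhe6144 keyid=1
target_ip=$(get_main_ns_ip)                          # 10.0.0.1 in this run

# Restrict the initiator to the digest/DH-group pair under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with DH-HMAC-CHAP; --dhchap-ctrlr-key is added only when a ckey exists.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$target_ip" -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# The controller only exists if authentication succeeded.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0
```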
00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.804 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.805 14:44:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.805 14:44:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:20.805 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.805 14:44:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.062 nvme0n1 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: ]] 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.062 14:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.628 nvme0n1 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: ]] 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.628 14:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.886 nvme0n1 00:26:21.886 14:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.886 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.886 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.886 14:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.886 14:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.188 14:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.446 nvme0n1 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.446 14:44:34 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ3NjZjM2UxOWZhMjI5YjZhNjhjY2JhMzllY2Q5NDbmPoNi: 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: ]] 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWUxODZhNzgzMWMzNWZjYTgwYjZiNjE2MmIwYTU0NjA1YzZiODU0OWZkOThlODE2NDgzNTM5MDNmM2Y1ODZmMVQfMhs=: 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.446 14:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.378 nvme0n1 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: ]] 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.378 14:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.944 nvme0n1 00:26:23.944 14:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.944 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.944 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.944 14:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.944 14:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.944 14:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.944 14:44:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.944 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.944 14:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.944 14:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.944 14:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.944 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.944 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:23.944 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.944 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.944 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:23.944 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:23.944 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:26:23.944 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:26:23.944 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.944 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:23.944 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDRjOTMwZmFhNDhmYmVmZDM4NzUxYjJkNTIzY2FmOWMVk6b6: 00:26:23.944 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: ]] 00:26:23.944 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZhZTlhN2VjMWE2ZjgzMjIzZDU3Nzk2MzZjMDJhZTAEzJki: 00:26:23.944 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:23.944 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.945 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.945 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:23.945 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:23.945 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.945 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:23.945 14:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.945 14:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.945 14:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.945 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.945 14:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.945 14:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.945 14:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.945 14:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.945 14:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.945 14:44:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.945 14:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.945 14:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.945 14:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.945 14:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.945 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:23.945 14:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.945 14:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.511 nvme0n1 00:26:24.511 14:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.511 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.511 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.511 14:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.511 14:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.511 14:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.769 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.769 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.769 14:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.769 14:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.769 14:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.769 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.769 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:24.769 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.769 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.769 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:24.769 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:24.769 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:26:24.769 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:26:24.769 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.769 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:24.769 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTgzMTRmNmFmNmMzYWQxMzQxYmYyMzJhODRjNDAyMGI4YjQ3Yjc5ODc2ODkxOTNmxIgVGA==: 00:26:24.769 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: ]] 00:26:24.769 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQ2NGY4NjEzYmUwOGZkYTNkYmRhOGFkMWVjMTM3M2JPseXt: 00:26:24.769 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:24.769 14:44:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.770 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.770 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:24.770 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:24.770 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.770 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:24.770 14:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.770 14:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.770 14:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.770 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.770 14:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.770 14:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.770 14:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.770 14:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.770 14:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.770 14:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.770 14:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.770 14:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.770 14:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.770 14:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.770 14:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:24.770 14:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.770 14:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.338 nvme0n1 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTY5ZmIwM2ZjZWQ2YTJmYWM0NzM2ZjZmOTllNWJlZTU3Zjc3ZmE0NDlkZTdiNzNiNmNlZDZkOGQwNzFhNGM1OQVmSbM=: 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:25.338 14:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.276 nvme0n1 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQyYmMxZTEwMjRmZTViZjY2MmVjMmQxZDM0YjA2YTJjN2ZlOTQyNTZlNjIzZDI54+vRJA==: 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: ]] 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUzZjhhMzc3YmY2ZjljZGE5MWI0YWM3Mjg1YWI5YzE3ZWU5YzEwYzQwNTI0NzZlbNWbGA==: 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.276 
14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.276 2024/07/10 14:44:38 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:26:26.276 request: 00:26:26.276 { 00:26:26.276 "method": "bdev_nvme_attach_controller", 00:26:26.276 "params": { 00:26:26.276 "name": "nvme0", 00:26:26.276 "trtype": "tcp", 00:26:26.276 "traddr": "10.0.0.1", 00:26:26.276 "adrfam": "ipv4", 00:26:26.276 "trsvcid": "4420", 00:26:26.276 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:26.276 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:26.276 "prchk_reftag": false, 00:26:26.276 "prchk_guard": false, 00:26:26.276 "hdgst": false, 00:26:26.276 "ddgst": false 00:26:26.276 } 00:26:26.276 } 00:26:26.276 Got JSON-RPC error response 00:26:26.276 GoRPCClient: error on JSON-RPC call 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- 
# rpc_cmd bdev_nvme_get_controllers 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.276 2024/07/10 14:44:38 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:26:26.276 request: 00:26:26.276 { 00:26:26.276 "method": "bdev_nvme_attach_controller", 00:26:26.276 "params": { 00:26:26.276 "name": 
"nvme0", 00:26:26.276 "trtype": "tcp", 00:26:26.276 "traddr": "10.0.0.1", 00:26:26.276 "adrfam": "ipv4", 00:26:26.276 "trsvcid": "4420", 00:26:26.276 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:26.276 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:26.276 "prchk_reftag": false, 00:26:26.276 "prchk_guard": false, 00:26:26.276 "hdgst": false, 00:26:26.276 "ddgst": false, 00:26:26.276 "dhchap_key": "key2" 00:26:26.276 } 00:26:26.276 } 00:26:26.276 Got JSON-RPC error response 00:26:26.276 GoRPCClient: error on JSON-RPC call 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.276 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.277 2024/07/10 14:44:38 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:26:26.277 request: 00:26:26.277 { 00:26:26.277 "method": "bdev_nvme_attach_controller", 00:26:26.277 "params": { 00:26:26.277 "name": "nvme0", 00:26:26.277 "trtype": "tcp", 00:26:26.277 "traddr": "10.0.0.1", 00:26:26.277 "adrfam": "ipv4", 00:26:26.277 "trsvcid": "4420", 00:26:26.277 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:26.277 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:26.277 "prchk_reftag": false, 00:26:26.277 "prchk_guard": false, 00:26:26.277 "hdgst": false, 00:26:26.277 "ddgst": false, 00:26:26.277 "dhchap_key": "key1", 00:26:26.277 "dhchap_ctrlr_key": "ckey2" 00:26:26.277 } 00:26:26.277 } 00:26:26.277 Got JSON-RPC error response 00:26:26.277 GoRPCClient: error on JSON-RPC call 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:26.277 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:26.277 rmmod nvme_tcp 00:26:26.277 rmmod nvme_fabrics 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 110796 ']' 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 110796 00:26:26.536 14:44:38 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 110796 ']' 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 110796 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 110796 00:26:26.536 killing process with pid 110796 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 110796' 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 110796 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 110796 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:26.536 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:26.794 14:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:27.361 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:27.361 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:26:27.361 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:27.620 14:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.hu1 /tmp/spdk.key-null.SCz /tmp/spdk.key-sha256.Wna /tmp/spdk.key-sha384.y6l /tmp/spdk.key-sha512.2Qw /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:26:27.620 14:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:27.878 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:27.878 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:27.878 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:27.878 00:26:27.878 real 0m36.488s 00:26:27.878 user 0m32.424s 00:26:27.878 sys 0m3.674s 00:26:27.878 14:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:27.878 14:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.878 ************************************ 00:26:27.878 END TEST nvmf_auth_host 00:26:27.878 ************************************ 00:26:27.878 14:44:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:27.878 14:44:40 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:26:27.878 14:44:40 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:27.878 14:44:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:27.878 14:44:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:27.878 14:44:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:27.878 ************************************ 00:26:27.878 START TEST nvmf_digest 00:26:27.878 ************************************ 00:26:27.878 14:44:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:28.136 * Looking for test storage... 
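The Input/output error from bdev_nvme_attach_controller above is the expected outcome of this nvmf_auth_host case: the call pairs --dhchap-key key1 with --dhchap-ctrlr-key ckey2, and the NOT wrapper around it (es=1, (( !es == 0 ))) asserts that the attach is refused, presumably because that key pairing does not match what the kernel target was configured with earlier in the test. Reduced to a standalone command, the traced attempt is roughly the sketch below; the arguments are copied from the trace, and treating rpc_cmd as a thin wrapper over scripts/rpc.py is an assumption.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
      -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey2
  # Expected to fail with Code=-5 (Input/output error); the test only checks
  # for a non-zero exit status, not for a specific error message.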
00:26:28.136 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:28.136 14:44:40 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:28.136 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:28.136 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:28.136 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 
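With NET_TYPE=virt, nvmftestinit falls through to nvmf_veth_init, which builds the topology used by the rest of the digest tests: an initiator veth pair kept in the default namespace, two target veth pairs whose *_if ends are moved into nvmf_tgt_ns_spdk, and a bridge joining the *_br ends, plus an iptables rule admitting port 4420. The ip/iptables commands traced below amount to the following sketch (names and addresses as in the trace; the failed teardown attempts at the start of the trace are omitted):
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3               # initiator -> targets
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1      # target namespace -> initiator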
00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:28.137 Cannot find device "nvmf_tgt_br" 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:28.137 Cannot find device "nvmf_tgt_br2" 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:28.137 Cannot find device "nvmf_tgt_br" 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:28.137 Cannot find device "nvmf_tgt_br2" 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:28.137 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:28.137 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:28.137 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:28.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:28.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:26:28.396 00:26:28.396 --- 10.0.0.2 ping statistics --- 00:26:28.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.396 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:28.396 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:28.396 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:26:28.396 00:26:28.396 --- 10.0.0.3 ping statistics --- 00:26:28.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.396 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:28.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:28.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:26:28.396 00:26:28.396 --- 10.0.0.1 ping statistics --- 00:26:28.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.396 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:28.396 ************************************ 00:26:28.396 START TEST nvmf_digest_clean 00:26:28.396 ************************************ 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=112391 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 112391 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 112391 ']' 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.396 14:44:40 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:28.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:28.396 14:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:28.396 [2024-07-10 14:44:40.631977] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:26:28.396 [2024-07-10 14:44:40.632070] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.655 [2024-07-10 14:44:40.752117] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:28.655 [2024-07-10 14:44:40.772932] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.655 [2024-07-10 14:44:40.813336] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:28.655 [2024-07-10 14:44:40.813387] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:28.655 [2024-07-10 14:44:40.813410] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:28.655 [2024-07-10 14:44:40.813421] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:28.655 [2024-07-10 14:44:40.813430] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:28.655 [2024-07-10 14:44:40.813463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.588 14:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:29.588 14:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:29.588 14:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:29.588 14:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:29.588 14:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:29.588 14:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:29.588 14:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:29.588 14:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:29.588 14:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:29.588 14:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.588 14:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:29.588 null0 00:26:29.588 [2024-07-10 14:44:41.713548] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:29.588 [2024-07-10 14:44:41.737644] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.588 14:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.588 14:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:29.588 14:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:29.588 14:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:29.588 14:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:29.588 14:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:29.588 14:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:29.588 14:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:29.588 14:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=112441 00:26:29.589 14:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 112441 /var/tmp/bperf.sock 00:26:29.589 14:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 112441 ']' 00:26:29.589 14:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:29.589 14:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:29.589 14:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:29.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
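nvmfappstart launches the target inside the namespace with --wait-for-rpc and waits on /var/tmp/spdk.sock, and common_target_config then pushes a JSON config over rpc_cmd that produces the null0 bdev and the 10.0.0.2:4420 TCP listener seen above. Spelled out as individual rpc.py calls, that bring-up is roughly the sketch below; the transport, listener address/port, subsystem NQN and bdev name are taken from the trace, while the null bdev size/block size, the serial number, and the allow-any-host flag are illustrative assumptions.
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc &
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc framework_start_init                        # target was started idle
  $rpc bdev_null_create null0 100 4096             # 100 MiB / 4 KiB block: assumed values
  $rpc nvmf_create_transport -t tcp
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420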
00:26:29.589 14:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:29.589 14:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:29.589 14:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:29.589 [2024-07-10 14:44:41.800623] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:26:29.589 [2024-07-10 14:44:41.800741] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112441 ] 00:26:29.847 [2024-07-10 14:44:41.922817] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:29.847 [2024-07-10 14:44:41.940305] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.847 [2024-07-10 14:44:41.981023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:30.780 14:44:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:30.780 14:44:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:30.780 14:44:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:30.780 14:44:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:30.780 14:44:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:31.038 14:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:31.038 14:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:31.296 nvme0n1 00:26:31.296 14:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:31.296 14:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:31.296 Running I/O for 2 seconds... 
00:26:33.825 00:26:33.825 Latency(us) 00:26:33.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:33.825 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:33.825 nvme0n1 : 2.00 18141.00 70.86 0.00 0.00 7047.35 3783.21 12213.53 00:26:33.825 =================================================================================================================== 00:26:33.825 Total : 18141.00 70.86 0.00 0.00 7047.35 3783.21 12213.53 00:26:33.825 0 00:26:33.825 14:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:33.825 14:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:33.825 14:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:33.825 14:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:33.825 | select(.opcode=="crc32c") 00:26:33.825 | "\(.module_name) \(.executed)"' 00:26:33.825 14:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:33.825 14:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:33.825 14:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:33.825 14:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:33.825 14:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:33.825 14:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 112441 00:26:33.825 14:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 112441 ']' 00:26:33.825 14:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 112441 00:26:33.825 14:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:33.825 14:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:33.825 14:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112441 00:26:33.825 14:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:33.825 14:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:33.825 killing process with pid 112441 00:26:33.825 14:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112441' 00:26:33.825 14:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 112441 00:26:33.825 Received shutdown signal, test time was about 2.000000 seconds 00:26:33.825 00:26:33.825 Latency(us) 00:26:33.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:33.825 =================================================================================================================== 00:26:33.825 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:33.825 14:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 112441 00:26:33.825 14:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:33.825 14:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:33.825 14:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:33.825 14:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:33.825 14:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:33.825 14:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:33.825 14:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:33.825 14:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=112526 00:26:33.825 14:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:33.825 14:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 112526 /var/tmp/bperf.sock 00:26:33.825 14:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 112526 ']' 00:26:33.825 14:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:33.825 14:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:33.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:33.825 14:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:33.825 14:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:33.825 14:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:33.825 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:33.825 Zero copy mechanism will not be used. 00:26:33.825 [2024-07-10 14:44:46.059511] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:26:33.825 [2024-07-10 14:44:46.059610] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112526 ] 00:26:34.083 [2024-07-10 14:44:46.181103] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:26:34.083 [2024-07-10 14:44:46.196450] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.083 [2024-07-10 14:44:46.231711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.083 14:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:34.083 14:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:34.083 14:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:34.083 14:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:34.083 14:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:34.343 14:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:34.343 14:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:34.909 nvme0n1 00:26:34.909 14:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:34.909 14:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:34.909 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:34.909 Zero copy mechanism will not be used. 00:26:34.909 Running I/O for 2 seconds... 00:26:36.837 00:26:36.837 Latency(us) 00:26:36.837 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:36.837 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:36.837 nvme0n1 : 2.00 7603.10 950.39 0.00 0.00 2100.35 659.08 3678.95 00:26:36.837 =================================================================================================================== 00:26:36.837 Total : 7603.10 950.39 0.00 0.00 2100.35 659.08 3678.95 00:26:36.837 0 00:26:37.181 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:37.181 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:37.181 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:37.181 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:37.182 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:37.182 | select(.opcode=="crc32c") 00:26:37.182 | "\(.module_name) \(.executed)"' 00:26:37.182 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:37.182 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:37.182 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:37.182 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:37.182 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 112526 00:26:37.182 14:44:49 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 112526 ']' 00:26:37.182 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 112526 00:26:37.182 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:37.182 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:37.182 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112526 00:26:37.182 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:37.182 killing process with pid 112526 00:26:37.182 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:37.182 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112526' 00:26:37.182 Received shutdown signal, test time was about 2.000000 seconds 00:26:37.182 00:26:37.182 Latency(us) 00:26:37.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.182 =================================================================================================================== 00:26:37.182 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:37.182 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 112526 00:26:37.182 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 112526 00:26:37.439 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:37.439 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:37.439 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:37.439 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:37.439 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:37.439 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:37.439 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:37.439 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=112597 00:26:37.439 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 112597 /var/tmp/bperf.sock 00:26:37.439 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 112597 ']' 00:26:37.439 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:37.439 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:37.439 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:37.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
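Each run_bperf pass repeats the same host-side pattern: start bdevperf idle on its own RPC socket, finish its framework init, attach the remote controller with data digest enabled, and drive the timed workload from the helper script. For the randwrite/4096/qd=128 pass being started here, the traced commands reduce to this sketch (paths and arguments as in the trace):
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bperf.sock framework_start_init
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests
  # The randread passes above differ only in -w/-o/-q; the 131072-byte passes
  # also note that the I/O size exceeds the 65536-byte zero-copy threshold.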
00:26:37.439 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:37.439 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:37.439 14:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:37.439 [2024-07-10 14:44:49.618075] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:26:37.439 [2024-07-10 14:44:49.618174] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112597 ] 00:26:37.698 [2024-07-10 14:44:49.742015] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:37.698 [2024-07-10 14:44:49.759561] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.698 [2024-07-10 14:44:49.806233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:38.631 14:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:38.631 14:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:38.631 14:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:38.631 14:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:38.631 14:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:38.631 14:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:38.631 14:44:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:39.197 nvme0n1 00:26:39.197 14:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:39.197 14:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:39.197 Running I/O for 2 seconds... 
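As a sanity check on the result tables, the MiB/s column is just IOPS multiplied by the I/O size: the randread passes above give 18141.00 * 4096 / 2^20 ≈ 70.86 MiB/s and 7603.10 * 131072 / 2^20 ≈ 950.39 MiB/s, matching the reported values, and the randwrite tables that follow can be checked the same way.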
00:26:41.101 00:26:41.101 Latency(us) 00:26:41.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:41.101 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:41.101 nvme0n1 : 2.01 20710.10 80.90 0.00 0.00 6170.14 3202.33 12809.31 00:26:41.101 =================================================================================================================== 00:26:41.101 Total : 20710.10 80.90 0.00 0.00 6170.14 3202.33 12809.31 00:26:41.101 0 00:26:41.360 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:41.360 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:41.360 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:41.360 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:41.360 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:41.360 | select(.opcode=="crc32c") 00:26:41.360 | "\(.module_name) \(.executed)"' 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 112597 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 112597 ']' 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 112597 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112597 00:26:41.619 killing process with pid 112597 00:26:41.619 Received shutdown signal, test time was about 2.000000 seconds 00:26:41.619 00:26:41.619 Latency(us) 00:26:41.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:41.619 =================================================================================================================== 00:26:41.619 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112597' 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 112597 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 112597 00:26:41.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
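After each pass, the script reads the accel framework's crc32c counters over the bperf socket and verifies that the digest work was executed by the expected module, which is software here since scan_dsa=false. A sketch of that check, reusing the exact jq filter from the trace:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
    | { read -r acc_module acc_executed
        (( acc_executed > 0 )) && [[ $acc_module == software ]]; }
  # A non-zero executed count from the software module confirms that the TCP
  # data-digest crc32c path was actually exercised during the run.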
00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=112688 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 112688 /var/tmp/bperf.sock 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 112688 ']' 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:41.619 14:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:41.619 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:41.619 Zero copy mechanism will not be used. 00:26:41.619 [2024-07-10 14:44:53.897758] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:26:41.619 [2024-07-10 14:44:53.897892] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112688 ] 00:26:41.877 [2024-07-10 14:44:54.015867] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:26:41.877 [2024-07-10 14:44:54.035340] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.877 [2024-07-10 14:44:54.073110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:41.877 14:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:41.877 14:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:41.877 14:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:41.877 14:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:41.877 14:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:42.444 14:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:42.444 14:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:42.702 nvme0n1 00:26:42.702 14:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:42.702 14:44:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:42.702 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:42.702 Zero copy mechanism will not be used. 00:26:42.702 Running I/O for 2 seconds... 00:26:45.238 00:26:45.238 Latency(us) 00:26:45.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:45.238 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:45.238 nvme0n1 : 2.00 6258.75 782.34 0.00 0.00 2550.38 1854.37 8936.73 00:26:45.238 =================================================================================================================== 00:26:45.238 Total : 6258.75 782.34 0.00 0.00 2550.38 1854.37 8936.73 00:26:45.238 0 00:26:45.238 14:44:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:45.238 14:44:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:45.238 14:44:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:45.238 14:44:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:45.238 | select(.opcode=="crc32c") 00:26:45.238 | "\(.module_name) \(.executed)"' 00:26:45.238 14:44:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:45.238 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:45.238 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:45.238 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:45.238 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:45.238 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 112688 00:26:45.238 14:44:57 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 112688 ']' 00:26:45.238 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 112688 00:26:45.238 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:45.238 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:45.238 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112688 00:26:45.238 killing process with pid 112688 00:26:45.238 Received shutdown signal, test time was about 2.000000 seconds 00:26:45.238 00:26:45.238 Latency(us) 00:26:45.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:45.238 =================================================================================================================== 00:26:45.238 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:45.238 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:45.238 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:45.238 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112688' 00:26:45.238 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 112688 00:26:45.238 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 112688 00:26:45.238 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 112391 00:26:45.238 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 112391 ']' 00:26:45.238 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 112391 00:26:45.238 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:45.238 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:45.238 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112391 00:26:45.238 killing process with pid 112391 00:26:45.238 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:45.238 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:45.238 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112391' 00:26:45.238 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 112391 00:26:45.238 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 112391 00:26:45.496 00:26:45.496 real 0m16.980s 00:26:45.496 user 0m32.725s 00:26:45.496 sys 0m4.167s 00:26:45.496 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:45.496 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:45.496 ************************************ 00:26:45.496 END TEST nvmf_digest_clean 00:26:45.496 ************************************ 00:26:45.496 14:44:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:26:45.496 14:44:57 nvmf_tcp.nvmf_digest -- 
host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:45.496 14:44:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:45.496 14:44:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:45.496 14:44:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:45.496 ************************************ 00:26:45.496 START TEST nvmf_digest_error 00:26:45.496 ************************************ 00:26:45.496 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:26:45.496 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:45.496 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:45.496 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:45.496 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:45.496 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=112784 00:26:45.496 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 112784 00:26:45.496 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 112784 ']' 00:26:45.496 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:45.496 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:45.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:45.497 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:45.497 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:45.497 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:45.497 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:45.497 [2024-07-10 14:44:57.662739] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:26:45.497 [2024-07-10 14:44:57.662830] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:45.497 [2024-07-10 14:44:57.785510] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:45.755 [2024-07-10 14:44:57.804309] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.755 [2024-07-10 14:44:57.839320] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:45.755 [2024-07-10 14:44:57.839376] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
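(For reference, the nvmf_digest_clean run above drives bdevperf entirely over its RPC socket. A minimal sketch of that call sequence, condensed from the trace in this log and using the same socket path, target address and subsystem NQN seen here — adjust for other setups; the final jq filter is the one host/digest.sh applies to the accel_get_stats output:)

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
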
00:26:45.755 [2024-07-10 14:44:57.839387] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:45.755 [2024-07-10 14:44:57.839396] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:45.755 [2024-07-10 14:44:57.839404] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:45.755 [2024-07-10 14:44:57.839434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.755 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:45.755 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:45.755 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:45.755 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:45.755 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:45.755 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:45.755 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:45.755 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.755 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:45.755 [2024-07-10 14:44:57.915830] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:45.755 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.755 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:45.755 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:45.755 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.755 14:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:45.755 null0 00:26:45.755 [2024-07-10 14:44:57.983693] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:45.755 [2024-07-10 14:44:58.007863] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:45.755 14:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.756 14:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:45.756 14:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:45.756 14:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:45.756 14:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:45.756 14:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:45.756 14:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=112815 00:26:45.756 14:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:45.756 14:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 112815 
/var/tmp/bperf.sock 00:26:45.756 14:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 112815 ']' 00:26:45.756 14:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:45.756 14:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:45.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:45.756 14:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:45.756 14:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:45.756 14:44:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:46.014 [2024-07-10 14:44:58.076078] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:26:46.014 [2024-07-10 14:44:58.076212] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112815 ] 00:26:46.014 [2024-07-10 14:44:58.202605] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:46.014 [2024-07-10 14:44:58.218949] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.014 [2024-07-10 14:44:58.260588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:46.948 14:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:46.948 14:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:46.948 14:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:46.948 14:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:47.206 14:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:47.206 14:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.206 14:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:47.206 14:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.206 14:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:47.206 14:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:47.464 nvme0n1 00:26:47.464 14:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:47.464 14:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:47.464 14:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:47.464 14:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.464 14:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:47.464 14:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:47.464 Running I/O for 2 seconds... 00:26:47.741 [2024-07-10 14:44:59.763139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:47.741 [2024-07-10 14:44:59.763212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.741 [2024-07-10 14:44:59.763229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.741 [2024-07-10 14:44:59.775107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:47.741 [2024-07-10 14:44:59.775171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.741 [2024-07-10 14:44:59.775187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.741 [2024-07-10 14:44:59.791192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:47.741 [2024-07-10 14:44:59.791259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.741 [2024-07-10 14:44:59.791275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.741 [2024-07-10 14:44:59.803457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:47.741 [2024-07-10 14:44:59.803517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.741 [2024-07-10 14:44:59.803533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.741 [2024-07-10 14:44:59.819683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:47.741 [2024-07-10 14:44:59.819746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.741 [2024-07-10 14:44:59.819761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.741 [2024-07-10 14:44:59.831031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:47.741 [2024-07-10 14:44:59.831094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.741 [2024-07-10 14:44:59.831110] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.741 [2024-07-10 14:44:59.844979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:47.741 [2024-07-10 14:44:59.845045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.741 [2024-07-10 14:44:59.845060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.741 [2024-07-10 14:44:59.859749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:47.741 [2024-07-10 14:44:59.859809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.741 [2024-07-10 14:44:59.859824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.741 [2024-07-10 14:44:59.873028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:47.741 [2024-07-10 14:44:59.873092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.741 [2024-07-10 14:44:59.873108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.741 [2024-07-10 14:44:59.887573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:47.741 [2024-07-10 14:44:59.887638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.742 [2024-07-10 14:44:59.887653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.742 [2024-07-10 14:44:59.903070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:47.742 [2024-07-10 14:44:59.903133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.742 [2024-07-10 14:44:59.903148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.742 [2024-07-10 14:44:59.917124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:47.742 [2024-07-10 14:44:59.917198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.742 [2024-07-10 14:44:59.917214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.742 [2024-07-10 14:44:59.932175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:47.742 [2024-07-10 14:44:59.932229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
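(Context for the repeating errors above and below: nvmf_digest_error assigns the crc32c opcode to the accel error module on the target and then injects corruption, so the data digests the initiator verifies fail by design. A condensed sketch of the RPC sequence this run used, taken from the trace in this log — rpc_cmd goes to the target's default /var/tmp/spdk.sock, bperf_rpc to /var/tmp/bperf.sock:)

    # target side: route crc32c through the error-injection accel module
    rpc.py accel_assign_opc -o crc32c -m error
    # bdevperf side: count NVMe errors and retry indefinitely
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # target side: corrupt the next 256 crc32c results, then run I/O
    rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    bdevperf.py -s /var/tmp/bperf.sock perform_tests
    # each corrupted digest is reported as a "data digest error" and the command
    # completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), as in the dump here
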
00:26:47.742 [2024-07-10 14:44:59.932245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.742 [2024-07-10 14:44:59.945782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:47.742 [2024-07-10 14:44:59.945859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.742 [2024-07-10 14:44:59.945875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.742 [2024-07-10 14:44:59.961663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:47.742 [2024-07-10 14:44:59.961731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.742 [2024-07-10 14:44:59.961747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.742 [2024-07-10 14:44:59.976274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:47.742 [2024-07-10 14:44:59.976344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.742 [2024-07-10 14:44:59.976360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.742 [2024-07-10 14:44:59.989514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:47.742 [2024-07-10 14:44:59.989574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.742 [2024-07-10 14:44:59.989590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.742 [2024-07-10 14:45:00.004380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:47.742 [2024-07-10 14:45:00.004458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.742 [2024-07-10 14:45:00.004475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.742 [2024-07-10 14:45:00.017925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:47.742 [2024-07-10 14:45:00.017991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.742 [2024-07-10 14:45:00.018009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.005 [2024-07-10 14:45:00.033755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.005 [2024-07-10 14:45:00.033825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 
lba:12511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.005 [2024-07-10 14:45:00.033840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.005 [2024-07-10 14:45:00.049702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.005 [2024-07-10 14:45:00.049774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.005 [2024-07-10 14:45:00.049790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.005 [2024-07-10 14:45:00.066142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.005 [2024-07-10 14:45:00.066210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.005 [2024-07-10 14:45:00.066226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.005 [2024-07-10 14:45:00.080117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.005 [2024-07-10 14:45:00.080189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.005 [2024-07-10 14:45:00.080204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.005 [2024-07-10 14:45:00.092980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.005 [2024-07-10 14:45:00.093047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.005 [2024-07-10 14:45:00.093063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.005 [2024-07-10 14:45:00.107682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.005 [2024-07-10 14:45:00.107750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.005 [2024-07-10 14:45:00.107767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.005 [2024-07-10 14:45:00.124614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.005 [2024-07-10 14:45:00.124681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.005 [2024-07-10 14:45:00.124696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.005 [2024-07-10 14:45:00.141387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.005 [2024-07-10 14:45:00.141456] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.005 [2024-07-10 14:45:00.141473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.005 [2024-07-10 14:45:00.152326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.005 [2024-07-10 14:45:00.152386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.005 [2024-07-10 14:45:00.152402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.005 [2024-07-10 14:45:00.169047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.005 [2024-07-10 14:45:00.169113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.005 [2024-07-10 14:45:00.169129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.005 [2024-07-10 14:45:00.184014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.005 [2024-07-10 14:45:00.184074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.005 [2024-07-10 14:45:00.184091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.005 [2024-07-10 14:45:00.197855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.005 [2024-07-10 14:45:00.197920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.005 [2024-07-10 14:45:00.197934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.005 [2024-07-10 14:45:00.210414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.005 [2024-07-10 14:45:00.210478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.005 [2024-07-10 14:45:00.210494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.005 [2024-07-10 14:45:00.224128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.005 [2024-07-10 14:45:00.224193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.005 [2024-07-10 14:45:00.224209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.005 [2024-07-10 14:45:00.238346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.005 
[2024-07-10 14:45:00.238418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.005 [2024-07-10 14:45:00.238434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.005 [2024-07-10 14:45:00.252932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.005 [2024-07-10 14:45:00.253003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.005 [2024-07-10 14:45:00.253019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.005 [2024-07-10 14:45:00.269186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.005 [2024-07-10 14:45:00.269261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.005 [2024-07-10 14:45:00.269277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.005 [2024-07-10 14:45:00.281753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.005 [2024-07-10 14:45:00.281813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.005 [2024-07-10 14:45:00.281829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.264 [2024-07-10 14:45:00.296443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.264 [2024-07-10 14:45:00.296511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.264 [2024-07-10 14:45:00.296526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.264 [2024-07-10 14:45:00.311703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.264 [2024-07-10 14:45:00.311768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.264 [2024-07-10 14:45:00.311784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.264 [2024-07-10 14:45:00.326509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.264 [2024-07-10 14:45:00.326578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.264 [2024-07-10 14:45:00.326593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.264 [2024-07-10 14:45:00.341304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xf13170) 00:26:48.264 [2024-07-10 14:45:00.341369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.264 [2024-07-10 14:45:00.341385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.264 [2024-07-10 14:45:00.354791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.264 [2024-07-10 14:45:00.354856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.264 [2024-07-10 14:45:00.354873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.264 [2024-07-10 14:45:00.369931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.264 [2024-07-10 14:45:00.369998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.264 [2024-07-10 14:45:00.370014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.264 [2024-07-10 14:45:00.384239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.264 [2024-07-10 14:45:00.384314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.264 [2024-07-10 14:45:00.384331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.264 [2024-07-10 14:45:00.399167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.264 [2024-07-10 14:45:00.399236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.264 [2024-07-10 14:45:00.399253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.264 [2024-07-10 14:45:00.411851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.264 [2024-07-10 14:45:00.411912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.264 [2024-07-10 14:45:00.411927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.264 [2024-07-10 14:45:00.427409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.264 [2024-07-10 14:45:00.427480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.264 [2024-07-10 14:45:00.427496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.264 [2024-07-10 14:45:00.440891] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.264 [2024-07-10 14:45:00.440960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.264 [2024-07-10 14:45:00.440976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.264 [2024-07-10 14:45:00.454727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.264 [2024-07-10 14:45:00.454793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.264 [2024-07-10 14:45:00.454809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.264 [2024-07-10 14:45:00.470554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.264 [2024-07-10 14:45:00.470627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.264 [2024-07-10 14:45:00.470644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.265 [2024-07-10 14:45:00.486946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.265 [2024-07-10 14:45:00.487012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.265 [2024-07-10 14:45:00.487029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.265 [2024-07-10 14:45:00.499927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.265 [2024-07-10 14:45:00.499992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.265 [2024-07-10 14:45:00.500008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.265 [2024-07-10 14:45:00.514615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.265 [2024-07-10 14:45:00.514678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.265 [2024-07-10 14:45:00.514694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.265 [2024-07-10 14:45:00.530219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.265 [2024-07-10 14:45:00.530306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.265 [2024-07-10 14:45:00.530324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:26:48.265 [2024-07-10 14:45:00.543457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.265 [2024-07-10 14:45:00.543523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:42 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.265 [2024-07-10 14:45:00.543538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.523 [2024-07-10 14:45:00.558849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.523 [2024-07-10 14:45:00.558914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.523 [2024-07-10 14:45:00.558931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.523 [2024-07-10 14:45:00.573783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.523 [2024-07-10 14:45:00.573847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.523 [2024-07-10 14:45:00.573862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.523 [2024-07-10 14:45:00.587838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.523 [2024-07-10 14:45:00.587899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.523 [2024-07-10 14:45:00.587914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.523 [2024-07-10 14:45:00.600956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.523 [2024-07-10 14:45:00.601016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.523 [2024-07-10 14:45:00.601031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.523 [2024-07-10 14:45:00.614908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.523 [2024-07-10 14:45:00.614973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.523 [2024-07-10 14:45:00.614988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.523 [2024-07-10 14:45:00.629333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.523 [2024-07-10 14:45:00.629397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.523 [2024-07-10 14:45:00.629412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.523 [2024-07-10 14:45:00.644529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.523 [2024-07-10 14:45:00.644600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.523 [2024-07-10 14:45:00.644616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.523 [2024-07-10 14:45:00.658696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.524 [2024-07-10 14:45:00.658772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.524 [2024-07-10 14:45:00.658788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.524 [2024-07-10 14:45:00.671930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.524 [2024-07-10 14:45:00.671995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.524 [2024-07-10 14:45:00.672010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.524 [2024-07-10 14:45:00.685934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.524 [2024-07-10 14:45:00.686000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.524 [2024-07-10 14:45:00.686017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.524 [2024-07-10 14:45:00.699072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.524 [2024-07-10 14:45:00.699143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.524 [2024-07-10 14:45:00.699159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.524 [2024-07-10 14:45:00.714934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.524 [2024-07-10 14:45:00.715003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.524 [2024-07-10 14:45:00.715019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.524 [2024-07-10 14:45:00.729292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.524 [2024-07-10 14:45:00.729355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.524 [2024-07-10 14:45:00.729370] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.524 [2024-07-10 14:45:00.743547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.524 [2024-07-10 14:45:00.743613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.524 [2024-07-10 14:45:00.743629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.524 [2024-07-10 14:45:00.758021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.524 [2024-07-10 14:45:00.758086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.524 [2024-07-10 14:45:00.758102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.524 [2024-07-10 14:45:00.770024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.524 [2024-07-10 14:45:00.770088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.524 [2024-07-10 14:45:00.770103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.524 [2024-07-10 14:45:00.785100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.524 [2024-07-10 14:45:00.785166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.524 [2024-07-10 14:45:00.785181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.524 [2024-07-10 14:45:00.799209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.524 [2024-07-10 14:45:00.799273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.524 [2024-07-10 14:45:00.799302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.782 [2024-07-10 14:45:00.813992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.782 [2024-07-10 14:45:00.814059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.782 [2024-07-10 14:45:00.814074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.782 [2024-07-10 14:45:00.828982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.782 [2024-07-10 14:45:00.829047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.782 [2024-07-10 14:45:00.829062] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.782 [2024-07-10 14:45:00.842053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.782 [2024-07-10 14:45:00.842120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.783 [2024-07-10 14:45:00.842135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.783 [2024-07-10 14:45:00.857155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.783 [2024-07-10 14:45:00.857228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.783 [2024-07-10 14:45:00.857250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.783 [2024-07-10 14:45:00.869512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.783 [2024-07-10 14:45:00.869570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.783 [2024-07-10 14:45:00.869586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.783 [2024-07-10 14:45:00.883371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.783 [2024-07-10 14:45:00.883440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.783 [2024-07-10 14:45:00.883455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.783 [2024-07-10 14:45:00.898856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.783 [2024-07-10 14:45:00.898926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.783 [2024-07-10 14:45:00.898941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.783 [2024-07-10 14:45:00.911952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.783 [2024-07-10 14:45:00.912021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.783 [2024-07-10 14:45:00.912036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.783 [2024-07-10 14:45:00.925581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.783 [2024-07-10 14:45:00.925650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:32 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:48.783 [2024-07-10 14:45:00.925666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.783 [2024-07-10 14:45:00.939422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.783 [2024-07-10 14:45:00.939476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.783 [2024-07-10 14:45:00.939492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.783 [2024-07-10 14:45:00.954219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.783 [2024-07-10 14:45:00.954423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.783 [2024-07-10 14:45:00.954542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.783 [2024-07-10 14:45:00.966721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.783 [2024-07-10 14:45:00.966931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.783 [2024-07-10 14:45:00.967021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.783 [2024-07-10 14:45:00.982324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.783 [2024-07-10 14:45:00.982503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.783 [2024-07-10 14:45:00.982612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.783 [2024-07-10 14:45:00.995608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.783 [2024-07-10 14:45:00.995833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.783 [2024-07-10 14:45:00.995930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.783 [2024-07-10 14:45:01.011063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.783 [2024-07-10 14:45:01.011301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.783 [2024-07-10 14:45:01.011385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.783 [2024-07-10 14:45:01.027370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.783 [2024-07-10 14:45:01.027614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 
lba:3038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.783 [2024-07-10 14:45:01.027710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.783 [2024-07-10 14:45:01.043352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.783 [2024-07-10 14:45:01.043616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.783 [2024-07-10 14:45:01.043718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.783 [2024-07-10 14:45:01.055579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.783 [2024-07-10 14:45:01.055794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.783 [2024-07-10 14:45:01.055900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.783 [2024-07-10 14:45:01.070719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:48.783 [2024-07-10 14:45:01.070912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.783 [2024-07-10 14:45:01.070999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.041 [2024-07-10 14:45:01.086196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.041 [2024-07-10 14:45:01.086385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.041 [2024-07-10 14:45:01.086511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.041 [2024-07-10 14:45:01.098075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.041 [2024-07-10 14:45:01.098127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.041 [2024-07-10 14:45:01.098142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.042 [2024-07-10 14:45:01.114138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.042 [2024-07-10 14:45:01.114197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.042 [2024-07-10 14:45:01.114212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.042 [2024-07-10 14:45:01.129319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.042 [2024-07-10 14:45:01.129374] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.042 [2024-07-10 14:45:01.129389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.042 [2024-07-10 14:45:01.144371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.042 [2024-07-10 14:45:01.144423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.042 [2024-07-10 14:45:01.144439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.042 [2024-07-10 14:45:01.156627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.042 [2024-07-10 14:45:01.156684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.042 [2024-07-10 14:45:01.156699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.042 [2024-07-10 14:45:01.171765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.042 [2024-07-10 14:45:01.171828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.042 [2024-07-10 14:45:01.171844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.042 [2024-07-10 14:45:01.186553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.042 [2024-07-10 14:45:01.186626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.042 [2024-07-10 14:45:01.186641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.042 [2024-07-10 14:45:01.201862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.042 [2024-07-10 14:45:01.201926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.042 [2024-07-10 14:45:01.201942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.042 [2024-07-10 14:45:01.213929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.042 [2024-07-10 14:45:01.213995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.042 [2024-07-10 14:45:01.214011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.042 [2024-07-10 14:45:01.229371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.042 
[2024-07-10 14:45:01.229430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.042 [2024-07-10 14:45:01.229445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.042 [2024-07-10 14:45:01.244726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.042 [2024-07-10 14:45:01.244787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.042 [2024-07-10 14:45:01.244802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.042 [2024-07-10 14:45:01.260801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.042 [2024-07-10 14:45:01.260865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.042 [2024-07-10 14:45:01.260881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.042 [2024-07-10 14:45:01.273559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.042 [2024-07-10 14:45:01.273611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.042 [2024-07-10 14:45:01.273625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.042 [2024-07-10 14:45:01.286737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.042 [2024-07-10 14:45:01.286793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.042 [2024-07-10 14:45:01.286808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.042 [2024-07-10 14:45:01.301188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.042 [2024-07-10 14:45:01.301246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.042 [2024-07-10 14:45:01.301262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.042 [2024-07-10 14:45:01.315989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.042 [2024-07-10 14:45:01.316062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.042 [2024-07-10 14:45:01.316077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.042 [2024-07-10 14:45:01.329903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xf13170) 00:26:49.042 [2024-07-10 14:45:01.329969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.042 [2024-07-10 14:45:01.329985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.300 [2024-07-10 14:45:01.342879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.300 [2024-07-10 14:45:01.342945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.300 [2024-07-10 14:45:01.342960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.300 [2024-07-10 14:45:01.355794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.300 [2024-07-10 14:45:01.355852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.300 [2024-07-10 14:45:01.355867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.300 [2024-07-10 14:45:01.370853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.300 [2024-07-10 14:45:01.370920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.300 [2024-07-10 14:45:01.370935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.300 [2024-07-10 14:45:01.385957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.300 [2024-07-10 14:45:01.386021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.300 [2024-07-10 14:45:01.386035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.300 [2024-07-10 14:45:01.398293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.300 [2024-07-10 14:45:01.398364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.300 [2024-07-10 14:45:01.398378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.300 [2024-07-10 14:45:01.413087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.300 [2024-07-10 14:45:01.413151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.300 [2024-07-10 14:45:01.413167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.300 [2024-07-10 14:45:01.425918] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.300 [2024-07-10 14:45:01.425984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.300 [2024-07-10 14:45:01.426000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.300 [2024-07-10 14:45:01.440340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.300 [2024-07-10 14:45:01.440400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.300 [2024-07-10 14:45:01.440415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.300 [2024-07-10 14:45:01.454972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.300 [2024-07-10 14:45:01.455027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.300 [2024-07-10 14:45:01.455042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.300 [2024-07-10 14:45:01.467702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.300 [2024-07-10 14:45:01.467753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.300 [2024-07-10 14:45:01.467768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.300 [2024-07-10 14:45:01.482542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.300 [2024-07-10 14:45:01.482601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.300 [2024-07-10 14:45:01.482617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.300 [2024-07-10 14:45:01.497202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.300 [2024-07-10 14:45:01.497256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.300 [2024-07-10 14:45:01.497272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.300 [2024-07-10 14:45:01.510146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.300 [2024-07-10 14:45:01.510206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.300 [2024-07-10 14:45:01.510221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:49.300 [2024-07-10 14:45:01.524983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.300 [2024-07-10 14:45:01.525042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.300 [2024-07-10 14:45:01.525057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.300 [2024-07-10 14:45:01.539661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.300 [2024-07-10 14:45:01.539728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.300 [2024-07-10 14:45:01.539754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.300 [2024-07-10 14:45:01.552533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.300 [2024-07-10 14:45:01.552596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.300 [2024-07-10 14:45:01.552611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.300 [2024-07-10 14:45:01.566876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.300 [2024-07-10 14:45:01.566945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.301 [2024-07-10 14:45:01.566960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.301 [2024-07-10 14:45:01.580012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.301 [2024-07-10 14:45:01.580089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.301 [2024-07-10 14:45:01.580105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.558 [2024-07-10 14:45:01.596393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.558 [2024-07-10 14:45:01.596465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.558 [2024-07-10 14:45:01.596482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.558 [2024-07-10 14:45:01.609521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.558 [2024-07-10 14:45:01.609584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.558 [2024-07-10 14:45:01.609600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.558 [2024-07-10 14:45:01.624968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.559 [2024-07-10 14:45:01.625040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.559 [2024-07-10 14:45:01.625056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.559 [2024-07-10 14:45:01.640930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.559 [2024-07-10 14:45:01.640992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.559 [2024-07-10 14:45:01.641008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.559 [2024-07-10 14:45:01.653147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.559 [2024-07-10 14:45:01.653213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.559 [2024-07-10 14:45:01.653229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.559 [2024-07-10 14:45:01.667895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.559 [2024-07-10 14:45:01.667960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.559 [2024-07-10 14:45:01.667976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.559 [2024-07-10 14:45:01.683924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.559 [2024-07-10 14:45:01.683985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.559 [2024-07-10 14:45:01.684000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.559 [2024-07-10 14:45:01.698357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.559 [2024-07-10 14:45:01.698428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.559 [2024-07-10 14:45:01.698443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.559 [2024-07-10 14:45:01.711602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.559 [2024-07-10 14:45:01.711663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.559 [2024-07-10 14:45:01.711678] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.559 [2024-07-10 14:45:01.728114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.559 [2024-07-10 14:45:01.728182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.559 [2024-07-10 14:45:01.728198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.559 [2024-07-10 14:45:01.743473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf13170) 00:26:49.559 [2024-07-10 14:45:01.743530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.559 [2024-07-10 14:45:01.743544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.559 00:26:49.559 Latency(us) 00:26:49.559 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.559 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:49.559 nvme0n1 : 2.00 17747.50 69.33 0.00 0.00 7203.75 3932.16 19422.49 00:26:49.559 =================================================================================================================== 00:26:49.559 Total : 17747.50 69.33 0.00 0.00 7203.75 3932.16 19422.49 00:26:49.559 0 00:26:49.559 14:45:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:49.559 14:45:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:49.559 14:45:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:49.559 | .driver_specific 00:26:49.559 | .nvme_error 00:26:49.559 | .status_code 00:26:49.559 | .command_transient_transport_error' 00:26:49.559 14:45:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:49.817 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 139 > 0 )) 00:26:49.817 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 112815 00:26:49.817 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 112815 ']' 00:26:49.817 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 112815 00:26:49.817 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:49.817 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:49.817 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112815 00:26:49.817 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:49.817 killing process with pid 112815 00:26:49.817 Received shutdown signal, test time was about 2.000000 seconds 00:26:49.817 00:26:49.817 Latency(us) 00:26:49.817 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.817 
=================================================================================================================== 00:26:49.817 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:49.817 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:49.817 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112815' 00:26:49.817 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 112815 00:26:49.817 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 112815 00:26:50.075 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:50.075 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:50.075 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:50.075 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:50.075 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:50.075 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=112904 00:26:50.075 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:50.075 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 112904 /var/tmp/bperf.sock 00:26:50.075 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 112904 ']' 00:26:50.075 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:50.075 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:50.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:50.075 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:50.075 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:50.075 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:50.075 [2024-07-10 14:45:02.295214] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:26:50.075 [2024-07-10 14:45:02.295342] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112904 ] 00:26:50.075 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:50.075 Zero copy mechanism will not be used. 00:26:50.333 [2024-07-10 14:45:02.416833] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
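Aside (not part of the captured console output): the transient-error check traced just above pulls the count out of bdev_get_iostat with a jq filter. Below is a condensed, hedged sketch of that step as a standalone helper; the rpc.py path, the bperf.sock socket, the bdev name and the jq filter are copied verbatim from the trace, while the function body and the $1/"$errs" handling are assumptions rather than the digest.sh source.

  # Count NVMe "command transient transport error" completions reported for a bdev.
  get_transient_errcount() {
      local bdev=$1
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
          | jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
  }

  # Mirrors the check in the trace: the run passes only if at least one digest error was counted.
  errs=$(get_transient_errcount nvme0n1)
  (( errs > 0 )) && echo "data digest errors were detected and reported ($errs)"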
00:26:50.333 [2024-07-10 14:45:02.429474] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.333 [2024-07-10 14:45:02.467356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.333 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:50.333 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:50.333 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:50.333 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:50.590 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:50.590 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.590 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:50.590 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.590 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:50.590 14:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:51.155 nvme0n1 00:26:51.155 14:45:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:51.155 14:45:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.155 14:45:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:51.156 14:45:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.156 14:45:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:51.156 14:45:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:51.156 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:51.156 Zero copy mechanism will not be used. 00:26:51.156 Running I/O for 2 seconds... 
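Aside (not part of the captured console output): the setup traced above for the second run (randread, 128 KiB I/O, queue depth 16) can be condensed into the sketch below. It is a hedged reconstruction of the traced RPCs, not the digest.sh source: the bperf-side calls use /var/tmp/bperf.sock exactly as shown in the trace, the addresses, NQN and flags are copied verbatim, but accel_error_inject_error is issued through the suite's rpc_cmd wrapper whose socket is not visible here, so sending it to rpc.py's default socket below is an assumption.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  BPERF="$RPC -s /var/tmp/bperf.sock"

  $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep NVMe error stats, retry forever
  $RPC accel_error_inject_error -o crc32c -t disable                     # clear any stale crc32c injection
  $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0                             # attach with data digest enabled
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32               # corrupt 32 crc32c operations
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With the injection armed, each corrupted crc32c result surfaces in the log that follows as a "data digest error" on the TCP qpair and is completed back as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is exactly what the iostat-based count checks for.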
00:26:51.156 [2024-07-10 14:45:03.298569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.156 [2024-07-10 14:45:03.298637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.156 [2024-07-10 14:45:03.298662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.156 [2024-07-10 14:45:03.302460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.156 [2024-07-10 14:45:03.302506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.156 [2024-07-10 14:45:03.302520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.156 [2024-07-10 14:45:03.307986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.156 [2024-07-10 14:45:03.308034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.156 [2024-07-10 14:45:03.308049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.156 [2024-07-10 14:45:03.313278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.156 [2024-07-10 14:45:03.313337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.156 [2024-07-10 14:45:03.313352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.156 [2024-07-10 14:45:03.318592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.156 [2024-07-10 14:45:03.318646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.156 [2024-07-10 14:45:03.318665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.156 [2024-07-10 14:45:03.321582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.156 [2024-07-10 14:45:03.321625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.156 [2024-07-10 14:45:03.321639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.156 [2024-07-10 14:45:03.326619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.156 [2024-07-10 14:45:03.326664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.156 [2024-07-10 14:45:03.326679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.156 [2024-07-10 14:45:03.330800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.156 [2024-07-10 14:45:03.330847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.156 [2024-07-10 14:45:03.330861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.156 [2024-07-10 14:45:03.335325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.156 [2024-07-10 14:45:03.335372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.156 [2024-07-10 14:45:03.335388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.156 [2024-07-10 14:45:03.339466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.156 [2024-07-10 14:45:03.339511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.156 [2024-07-10 14:45:03.339526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.156 [2024-07-10 14:45:03.344399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.156 [2024-07-10 14:45:03.344454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.156 [2024-07-10 14:45:03.344469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.156 [2024-07-10 14:45:03.348973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.156 [2024-07-10 14:45:03.349018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.156 [2024-07-10 14:45:03.349032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.156 [2024-07-10 14:45:03.352986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.156 [2024-07-10 14:45:03.353030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.156 [2024-07-10 14:45:03.353044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.156 [2024-07-10 14:45:03.357943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.156 [2024-07-10 14:45:03.357989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.156 [2024-07-10 14:45:03.358009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.156 [2024-07-10 14:45:03.362459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.156 [2024-07-10 14:45:03.362502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.156 [2024-07-10 14:45:03.362516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.156 [2024-07-10 14:45:03.366916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.156 [2024-07-10 14:45:03.366962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.156 [2024-07-10 14:45:03.366977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.156 [2024-07-10 14:45:03.371829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.156 [2024-07-10 14:45:03.371875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.156 [2024-07-10 14:45:03.371889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.156 [2024-07-10 14:45:03.376012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.156 [2024-07-10 14:45:03.376068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.156 [2024-07-10 14:45:03.376085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.156 [2024-07-10 14:45:03.380734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.156 [2024-07-10 14:45:03.380786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.156 [2024-07-10 14:45:03.380802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.156 [2024-07-10 14:45:03.384695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.156 [2024-07-10 14:45:03.384738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.156 [2024-07-10 14:45:03.384753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.156 [2024-07-10 14:45:03.389208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.156 [2024-07-10 14:45:03.389254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.156 [2024-07-10 14:45:03.389268] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.156 [2024-07-10 14:45:03.393764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.156 [2024-07-10 14:45:03.393809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.156 [2024-07-10 14:45:03.393825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.156 [2024-07-10 14:45:03.398614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.156 [2024-07-10 14:45:03.398659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.156 [2024-07-10 14:45:03.398673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.156 [2024-07-10 14:45:03.402352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.156 [2024-07-10 14:45:03.402401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.156 [2024-07-10 14:45:03.402421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.156 [2024-07-10 14:45:03.406908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.156 [2024-07-10 14:45:03.406954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.156 [2024-07-10 14:45:03.406968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.156 [2024-07-10 14:45:03.411848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.156 [2024-07-10 14:45:03.411893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.156 [2024-07-10 14:45:03.411917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.156 [2024-07-10 14:45:03.417331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.156 [2024-07-10 14:45:03.417374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.156 [2024-07-10 14:45:03.417388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.156 [2024-07-10 14:45:03.421805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.156 [2024-07-10 14:45:03.421850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.157 
[2024-07-10 14:45:03.421865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.157 [2024-07-10 14:45:03.425418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.157 [2024-07-10 14:45:03.425466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.157 [2024-07-10 14:45:03.425482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.157 [2024-07-10 14:45:03.429739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.157 [2024-07-10 14:45:03.429783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.157 [2024-07-10 14:45:03.429798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.157 [2024-07-10 14:45:03.434466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.157 [2024-07-10 14:45:03.434510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.157 [2024-07-10 14:45:03.434525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.157 [2024-07-10 14:45:03.438843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.157 [2024-07-10 14:45:03.438889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.157 [2024-07-10 14:45:03.438905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.157 [2024-07-10 14:45:03.442412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.157 [2024-07-10 14:45:03.442457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.157 [2024-07-10 14:45:03.442471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.416 [2024-07-10 14:45:03.447247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.416 [2024-07-10 14:45:03.447306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.416 [2024-07-10 14:45:03.447323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.416 [2024-07-10 14:45:03.452966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.416 [2024-07-10 14:45:03.453021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:51.416 [2024-07-10 14:45:03.453037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.416 [2024-07-10 14:45:03.457838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.416 [2024-07-10 14:45:03.457883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.416 [2024-07-10 14:45:03.457897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.416 [2024-07-10 14:45:03.462795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.416 [2024-07-10 14:45:03.462842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.416 [2024-07-10 14:45:03.462857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.416 [2024-07-10 14:45:03.465781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.416 [2024-07-10 14:45:03.465825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.416 [2024-07-10 14:45:03.465840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.416 [2024-07-10 14:45:03.470387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.416 [2024-07-10 14:45:03.470431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.416 [2024-07-10 14:45:03.470446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.416 [2024-07-10 14:45:03.474687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.416 [2024-07-10 14:45:03.474731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.416 [2024-07-10 14:45:03.474747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.416 [2024-07-10 14:45:03.478857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.416 [2024-07-10 14:45:03.478911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.416 [2024-07-10 14:45:03.478933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.417 [2024-07-10 14:45:03.483140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.417 [2024-07-10 14:45:03.483184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.417 [2024-07-10 14:45:03.483199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.417 [2024-07-10 14:45:03.487429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.417 [2024-07-10 14:45:03.487472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.417 [2024-07-10 14:45:03.487487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.417 [2024-07-10 14:45:03.492318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.417 [2024-07-10 14:45:03.492361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.417 [2024-07-10 14:45:03.492375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.417 [2024-07-10 14:45:03.496714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.417 [2024-07-10 14:45:03.496761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.417 [2024-07-10 14:45:03.496778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.417 [2024-07-10 14:45:03.501322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.417 [2024-07-10 14:45:03.501363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.417 [2024-07-10 14:45:03.501377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.417 [2024-07-10 14:45:03.506246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.417 [2024-07-10 14:45:03.506311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.417 [2024-07-10 14:45:03.506327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.417 [2024-07-10 14:45:03.510224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.417 [2024-07-10 14:45:03.510273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.417 [2024-07-10 14:45:03.510311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.417 [2024-07-10 14:45:03.515003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.417 [2024-07-10 14:45:03.515048] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:51.417 [2024-07-10 14:45:03.515062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:51.417 [2024-07-10 14:45:03.520374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0)
00:26:51.417 [2024-07-10 14:45:03.520430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:51.417 [2024-07-10 14:45:03.520448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[log condensed: the same three-message sequence repeats continuously from [2024-07-10 14:45:03.524] through [2024-07-10 14:45:04.164]: an nvme_tcp.c:1459 *ERROR* reporting a data digest error on tqpair=(0xcc50c0), the affected READ command (nsid:1, len:32, varying cid and lba), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion; the elapsed timestamp advances from 00:26:51.417 to 00:26:51.938 over this span]
00:26:51.938 [2024-07-10 14:45:04.164728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0)
00:26:51.938 [2024-07-10 14:45:04.164780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-10 14:45:04.164795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.938 [2024-07-10 14:45:04.169621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.938 [2024-07-10 14:45:04.169679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.938 [2024-07-10 14:45:04.169696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.938 [2024-07-10 14:45:04.173659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.938 [2024-07-10 14:45:04.173704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.938 [2024-07-10 14:45:04.173718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.938 [2024-07-10 14:45:04.178630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.938 [2024-07-10 14:45:04.178675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.938 [2024-07-10 14:45:04.178690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.938 [2024-07-10 14:45:04.184065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.938 [2024-07-10 14:45:04.184111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.938 [2024-07-10 14:45:04.184127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.938 [2024-07-10 14:45:04.187858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.938 [2024-07-10 14:45:04.187901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.938 [2024-07-10 14:45:04.187915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.938 [2024-07-10 14:45:04.192644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.938 [2024-07-10 14:45:04.192693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.938 [2024-07-10 14:45:04.192708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.938 [2024-07-10 14:45:04.197436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.938 [2024-07-10 14:45:04.197484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:51.938 [2024-07-10 14:45:04.197500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.938 [2024-07-10 14:45:04.203171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.938 [2024-07-10 14:45:04.203222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.938 [2024-07-10 14:45:04.203241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.938 [2024-07-10 14:45:04.208450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.938 [2024-07-10 14:45:04.208497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.938 [2024-07-10 14:45:04.208513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.938 [2024-07-10 14:45:04.211523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.938 [2024-07-10 14:45:04.211567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.938 [2024-07-10 14:45:04.211581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.938 [2024-07-10 14:45:04.216334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.938 [2024-07-10 14:45:04.216383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.938 [2024-07-10 14:45:04.216398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.938 [2024-07-10 14:45:04.221543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:51.938 [2024-07-10 14:45:04.221590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.938 [2024-07-10 14:45:04.221605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.198 [2024-07-10 14:45:04.226504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.198 [2024-07-10 14:45:04.226551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.198 [2024-07-10 14:45:04.226566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.198 [2024-07-10 14:45:04.230066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.198 [2024-07-10 14:45:04.230115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.198 [2024-07-10 14:45:04.230130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.198 [2024-07-10 14:45:04.234713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.198 [2024-07-10 14:45:04.234794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.198 [2024-07-10 14:45:04.234812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.198 [2024-07-10 14:45:04.239347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.198 [2024-07-10 14:45:04.239415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.198 [2024-07-10 14:45:04.239430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.198 [2024-07-10 14:45:04.243677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.198 [2024-07-10 14:45:04.243755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.198 [2024-07-10 14:45:04.243771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.198 [2024-07-10 14:45:04.248433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.198 [2024-07-10 14:45:04.248504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.198 [2024-07-10 14:45:04.248525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.198 [2024-07-10 14:45:04.253217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.198 [2024-07-10 14:45:04.253301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.198 [2024-07-10 14:45:04.253318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.198 [2024-07-10 14:45:04.257495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.198 [2024-07-10 14:45:04.257550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.198 [2024-07-10 14:45:04.257570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.198 [2024-07-10 14:45:04.261975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.198 [2024-07-10 14:45:04.262030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.198 [2024-07-10 14:45:04.262046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.198 [2024-07-10 14:45:04.266901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.198 [2024-07-10 14:45:04.266984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.198 [2024-07-10 14:45:04.267001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.198 [2024-07-10 14:45:04.270224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.198 [2024-07-10 14:45:04.270299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.198 [2024-07-10 14:45:04.270317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.198 [2024-07-10 14:45:04.275269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.198 [2024-07-10 14:45:04.275336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.198 [2024-07-10 14:45:04.275351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.198 [2024-07-10 14:45:04.280224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.198 [2024-07-10 14:45:04.280324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.198 [2024-07-10 14:45:04.280344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.198 [2024-07-10 14:45:04.284132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.198 [2024-07-10 14:45:04.284199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.198 [2024-07-10 14:45:04.284214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.198 [2024-07-10 14:45:04.288662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.198 [2024-07-10 14:45:04.288723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.198 [2024-07-10 14:45:04.288739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.198 [2024-07-10 14:45:04.293674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.198 
[2024-07-10 14:45:04.293731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.198 [2024-07-10 14:45:04.293746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.198 [2024-07-10 14:45:04.297896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.198 [2024-07-10 14:45:04.297972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.198 [2024-07-10 14:45:04.297990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.198 [2024-07-10 14:45:04.302549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.198 [2024-07-10 14:45:04.302618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.198 [2024-07-10 14:45:04.302636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.198 [2024-07-10 14:45:04.306792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.198 [2024-07-10 14:45:04.306853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.198 [2024-07-10 14:45:04.306870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.198 [2024-07-10 14:45:04.311579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.198 [2024-07-10 14:45:04.311660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.198 [2024-07-10 14:45:04.311675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.198 [2024-07-10 14:45:04.316840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.198 [2024-07-10 14:45:04.316942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.198 [2024-07-10 14:45:04.316958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.198 [2024-07-10 14:45:04.321301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.198 [2024-07-10 14:45:04.321357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.198 [2024-07-10 14:45:04.321380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.199 [2024-07-10 14:45:04.325933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xcc50c0) 00:26:52.199 [2024-07-10 14:45:04.325988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.199 [2024-07-10 14:45:04.326002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.199 [2024-07-10 14:45:04.330512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.199 [2024-07-10 14:45:04.330573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.199 [2024-07-10 14:45:04.330588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.199 [2024-07-10 14:45:04.335374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.199 [2024-07-10 14:45:04.335455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.199 [2024-07-10 14:45:04.335472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.199 [2024-07-10 14:45:04.340134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.199 [2024-07-10 14:45:04.340218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.199 [2024-07-10 14:45:04.340235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.199 [2024-07-10 14:45:04.344153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.199 [2024-07-10 14:45:04.344225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.199 [2024-07-10 14:45:04.344240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.199 [2024-07-10 14:45:04.349481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.199 [2024-07-10 14:45:04.349551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.199 [2024-07-10 14:45:04.349568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.199 [2024-07-10 14:45:04.353370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.199 [2024-07-10 14:45:04.353436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.199 [2024-07-10 14:45:04.353459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.199 [2024-07-10 14:45:04.358259] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.199 [2024-07-10 14:45:04.358326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.199 [2024-07-10 14:45:04.358343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.199 [2024-07-10 14:45:04.363476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.199 [2024-07-10 14:45:04.363529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.199 [2024-07-10 14:45:04.363544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.199 [2024-07-10 14:45:04.366865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.199 [2024-07-10 14:45:04.366911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.199 [2024-07-10 14:45:04.366925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.199 [2024-07-10 14:45:04.371592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.199 [2024-07-10 14:45:04.371642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.199 [2024-07-10 14:45:04.371660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.199 [2024-07-10 14:45:04.376663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.199 [2024-07-10 14:45:04.376713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.199 [2024-07-10 14:45:04.376727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.199 [2024-07-10 14:45:04.380199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.199 [2024-07-10 14:45:04.380255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.199 [2024-07-10 14:45:04.380272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.199 [2024-07-10 14:45:04.383884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.199 [2024-07-10 14:45:04.383930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.199 [2024-07-10 14:45:04.383945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
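The *ERROR*/*NOTICE* records above all report the same failure mode: nvme_tcp_accel_seq_recv_compute_crc32_done finds that the CRC-32C data digest recomputed over received PDU data does not match the digest sent with the PDU, and spdk_nvme_print_completion then shows the affected READ completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22) and dnr:0, so the command may be retried. The sketch below is a minimal, self-contained illustration of that digest check, assuming the standard CRC-32C (Castagnoli) parameters used for NVMe/TCP digests; it is not SPDK's accel-sequence code path, and data_digest_ok is a hypothetical helper, not an SPDK API.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Bitwise CRC-32C (Castagnoli): reflected polynomial 0x82F63B78,
 * initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Hypothetical helper mirroring what the recv path logs above:
 * recompute the digest over the received data and compare it with
 * the digest that arrived alongside it. */
static int data_digest_ok(const uint8_t *data, size_t len, uint32_t ddgst)
{
    return crc32c(data, len) == ddgst;
}

int main(void)
{
    const uint8_t payload[] = "123456789";
    uint32_t good = crc32c(payload, 9);   /* 0xE3069283, the CRC-32C check value */

    printf("intact payload:   %s\n", data_digest_ok(payload, 9, good) ? "ok" : "digest error");
    printf("corrupted digest: %s\n", data_digest_ok(payload, 9, good ^ 1u) ? "ok" : "digest error");
    return 0;
}

When the comparison fails the data cannot be trusted, which is why every mismatch above is surfaced as a transient, retryable transport error rather than as a normal completion.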
00:26:52.199 [2024-07-10 14:45:04.389269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.199 [2024-07-10 14:45:04.389331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.199 [2024-07-10 14:45:04.389347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.199 [2024-07-10 14:45:04.394242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.199 [2024-07-10 14:45:04.394303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.199 [2024-07-10 14:45:04.394319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.199 [2024-07-10 14:45:04.397882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.199 [2024-07-10 14:45:04.397931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.199 [2024-07-10 14:45:04.397953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.199 [2024-07-10 14:45:04.403423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.199 [2024-07-10 14:45:04.403475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.199 [2024-07-10 14:45:04.403490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.199 [2024-07-10 14:45:04.408551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.199 [2024-07-10 14:45:04.408601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.199 [2024-07-10 14:45:04.408616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.199 [2024-07-10 14:45:04.412540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.199 [2024-07-10 14:45:04.412589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.199 [2024-07-10 14:45:04.412605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.199 [2024-07-10 14:45:04.416803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.199 [2024-07-10 14:45:04.416870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.199 [2024-07-10 14:45:04.416902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.199 [2024-07-10 14:45:04.421462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.199 [2024-07-10 14:45:04.421510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.199 [2024-07-10 14:45:04.421525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.199 [2024-07-10 14:45:04.425834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.199 [2024-07-10 14:45:04.425888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.199 [2024-07-10 14:45:04.425909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.199 [2024-07-10 14:45:04.430327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.199 [2024-07-10 14:45:04.430374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.199 [2024-07-10 14:45:04.430388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.199 [2024-07-10 14:45:04.434817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.199 [2024-07-10 14:45:04.434864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.199 [2024-07-10 14:45:04.434879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.199 [2024-07-10 14:45:04.439208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.199 [2024-07-10 14:45:04.439257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.199 [2024-07-10 14:45:04.439272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.199 [2024-07-10 14:45:04.444668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.199 [2024-07-10 14:45:04.444727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.199 [2024-07-10 14:45:04.444743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.199 [2024-07-10 14:45:04.448791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.199 [2024-07-10 14:45:04.448839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.199 [2024-07-10 14:45:04.448854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.199 [2024-07-10 14:45:04.453784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.199 [2024-07-10 14:45:04.453840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.199 [2024-07-10 14:45:04.453855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.200 [2024-07-10 14:45:04.457775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.200 [2024-07-10 14:45:04.457826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.200 [2024-07-10 14:45:04.457841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.200 [2024-07-10 14:45:04.462250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.200 [2024-07-10 14:45:04.462316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.200 [2024-07-10 14:45:04.462331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.200 [2024-07-10 14:45:04.466843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.200 [2024-07-10 14:45:04.466893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.200 [2024-07-10 14:45:04.466909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.200 [2024-07-10 14:45:04.470751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.200 [2024-07-10 14:45:04.470797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.200 [2024-07-10 14:45:04.470812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.200 [2024-07-10 14:45:04.475660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.200 [2024-07-10 14:45:04.475715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.200 [2024-07-10 14:45:04.475729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.200 [2024-07-10 14:45:04.480652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.200 [2024-07-10 14:45:04.480715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.200 [2024-07-10 14:45:04.480736] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.200 [2024-07-10 14:45:04.483869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.200 [2024-07-10 14:45:04.483919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.200 [2024-07-10 14:45:04.483934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.460 [2024-07-10 14:45:04.488886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.460 [2024-07-10 14:45:04.488939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.460 [2024-07-10 14:45:04.488954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.460 [2024-07-10 14:45:04.493532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.460 [2024-07-10 14:45:04.493589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.460 [2024-07-10 14:45:04.493608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.460 [2024-07-10 14:45:04.498365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.460 [2024-07-10 14:45:04.498422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.460 [2024-07-10 14:45:04.498438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.460 [2024-07-10 14:45:04.502321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.460 [2024-07-10 14:45:04.502384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.460 [2024-07-10 14:45:04.502404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.460 [2024-07-10 14:45:04.506735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.460 [2024-07-10 14:45:04.506790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.460 [2024-07-10 14:45:04.506806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.460 [2024-07-10 14:45:04.511948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.460 [2024-07-10 14:45:04.512007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.460 
[2024-07-10 14:45:04.512022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.460 [2024-07-10 14:45:04.516116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.460 [2024-07-10 14:45:04.516190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.460 [2024-07-10 14:45:04.516207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.460 [2024-07-10 14:45:04.520578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.460 [2024-07-10 14:45:04.520638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.460 [2024-07-10 14:45:04.520653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.460 [2024-07-10 14:45:04.524521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.460 [2024-07-10 14:45:04.524570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.460 [2024-07-10 14:45:04.524585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.460 [2024-07-10 14:45:04.529302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.460 [2024-07-10 14:45:04.529357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.460 [2024-07-10 14:45:04.529372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.460 [2024-07-10 14:45:04.533193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.460 [2024-07-10 14:45:04.533246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.460 [2024-07-10 14:45:04.533261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.460 [2024-07-10 14:45:04.537530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.460 [2024-07-10 14:45:04.537585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.460 [2024-07-10 14:45:04.537600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.460 [2024-07-10 14:45:04.542093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.460 [2024-07-10 14:45:04.542148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:52.460 [2024-07-10 14:45:04.542163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.460 [2024-07-10 14:45:04.547133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.460 [2024-07-10 14:45:04.547193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.460 [2024-07-10 14:45:04.547209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.460 [2024-07-10 14:45:04.551805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.460 [2024-07-10 14:45:04.551865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.460 [2024-07-10 14:45:04.551880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.460 [2024-07-10 14:45:04.555556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.460 [2024-07-10 14:45:04.555614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.460 [2024-07-10 14:45:04.555629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.460 [2024-07-10 14:45:04.561009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.460 [2024-07-10 14:45:04.561068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.460 [2024-07-10 14:45:04.561083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.460 [2024-07-10 14:45:04.566326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.460 [2024-07-10 14:45:04.566390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.460 [2024-07-10 14:45:04.566405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.460 [2024-07-10 14:45:04.570350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.460 [2024-07-10 14:45:04.570409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.460 [2024-07-10 14:45:04.570424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.460 [2024-07-10 14:45:04.576492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.460 [2024-07-10 14:45:04.576556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.460 [2024-07-10 14:45:04.576572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.460 [2024-07-10 14:45:04.582155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.460 [2024-07-10 14:45:04.582223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.460 [2024-07-10 14:45:04.582242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.461 [2024-07-10 14:45:04.586348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.461 [2024-07-10 14:45:04.586414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.461 [2024-07-10 14:45:04.586429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.461 [2024-07-10 14:45:04.591184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.461 [2024-07-10 14:45:04.591251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.461 [2024-07-10 14:45:04.591270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.461 [2024-07-10 14:45:04.596410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.461 [2024-07-10 14:45:04.596477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.461 [2024-07-10 14:45:04.596496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.461 [2024-07-10 14:45:04.601325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.461 [2024-07-10 14:45:04.601393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.461 [2024-07-10 14:45:04.601413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.461 [2024-07-10 14:45:04.606653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.461 [2024-07-10 14:45:04.606721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.461 [2024-07-10 14:45:04.606741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.461 [2024-07-10 14:45:04.610033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.461 [2024-07-10 14:45:04.610092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.461 [2024-07-10 14:45:04.610111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.461 [2024-07-10 14:45:04.614987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.461 [2024-07-10 14:45:04.615041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.461 [2024-07-10 14:45:04.615056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.461 [2024-07-10 14:45:04.619826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.461 [2024-07-10 14:45:04.619883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.461 [2024-07-10 14:45:04.619898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.461 [2024-07-10 14:45:04.625233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.461 [2024-07-10 14:45:04.625307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.461 [2024-07-10 14:45:04.625326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.461 [2024-07-10 14:45:04.629151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.461 [2024-07-10 14:45:04.629206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.461 [2024-07-10 14:45:04.629220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.461 [2024-07-10 14:45:04.633687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.461 [2024-07-10 14:45:04.633738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.461 [2024-07-10 14:45:04.633754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.461 [2024-07-10 14:45:04.638798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.461 [2024-07-10 14:45:04.638853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.461 [2024-07-10 14:45:04.638868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.461 [2024-07-10 14:45:04.643564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.461 
[2024-07-10 14:45:04.643620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.461 [2024-07-10 14:45:04.643635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.461 [2024-07-10 14:45:04.648006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.461 [2024-07-10 14:45:04.648070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.461 [2024-07-10 14:45:04.648089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.461 [2024-07-10 14:45:04.652956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.461 [2024-07-10 14:45:04.653028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.461 [2024-07-10 14:45:04.653048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.461 [2024-07-10 14:45:04.657784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.461 [2024-07-10 14:45:04.657850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.461 [2024-07-10 14:45:04.657870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.461 [2024-07-10 14:45:04.662353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.461 [2024-07-10 14:45:04.662419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.461 [2024-07-10 14:45:04.662450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.461 [2024-07-10 14:45:04.667307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.461 [2024-07-10 14:45:04.667376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.461 [2024-07-10 14:45:04.667399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.461 [2024-07-10 14:45:04.670832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.461 [2024-07-10 14:45:04.670888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.461 [2024-07-10 14:45:04.670907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.461 [2024-07-10 14:45:04.675394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xcc50c0) 00:26:52.461 [2024-07-10 14:45:04.675462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.461 [2024-07-10 14:45:04.675480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.461 [2024-07-10 14:45:04.680402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.461 [2024-07-10 14:45:04.680466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.461 [2024-07-10 14:45:04.680482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.461 [2024-07-10 14:45:04.684213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.461 [2024-07-10 14:45:04.684273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.461 [2024-07-10 14:45:04.684310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.461 [2024-07-10 14:45:04.688786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.461 [2024-07-10 14:45:04.688849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.461 [2024-07-10 14:45:04.688879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.461 [2024-07-10 14:45:04.693608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.461 [2024-07-10 14:45:04.693660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.461 [2024-07-10 14:45:04.693675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.461 [2024-07-10 14:45:04.697846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.461 [2024-07-10 14:45:04.697898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.461 [2024-07-10 14:45:04.697913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.461 [2024-07-10 14:45:04.702129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.461 [2024-07-10 14:45:04.702186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.461 [2024-07-10 14:45:04.702206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.461 [2024-07-10 14:45:04.707269] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.461 [2024-07-10 14:45:04.707330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.461 [2024-07-10 14:45:04.707346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.461 [2024-07-10 14:45:04.712210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.462 [2024-07-10 14:45:04.712261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.462 [2024-07-10 14:45:04.712277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.462 [2024-07-10 14:45:04.716159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.462 [2024-07-10 14:45:04.716221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.462 [2024-07-10 14:45:04.716239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.462 [2024-07-10 14:45:04.721812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.462 [2024-07-10 14:45:04.721878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.462 [2024-07-10 14:45:04.721900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.462 [2024-07-10 14:45:04.726705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.462 [2024-07-10 14:45:04.726769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.462 [2024-07-10 14:45:04.726788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.462 [2024-07-10 14:45:04.730463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.462 [2024-07-10 14:45:04.730514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.462 [2024-07-10 14:45:04.730528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.462 [2024-07-10 14:45:04.735874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.462 [2024-07-10 14:45:04.735929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.462 [2024-07-10 14:45:04.735947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:26:52.462 [2024-07-10 14:45:04.740878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.462 [2024-07-10 14:45:04.740932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.462 [2024-07-10 14:45:04.740947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.462 [2024-07-10 14:45:04.744853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.462 [2024-07-10 14:45:04.744923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.462 [2024-07-10 14:45:04.744939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.722 [2024-07-10 14:45:04.749747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.722 [2024-07-10 14:45:04.749807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.722 [2024-07-10 14:45:04.749821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.722 [2024-07-10 14:45:04.755326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.722 [2024-07-10 14:45:04.755382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.722 [2024-07-10 14:45:04.755399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.722 [2024-07-10 14:45:04.760676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.722 [2024-07-10 14:45:04.760736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.722 [2024-07-10 14:45:04.760751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.722 [2024-07-10 14:45:04.765844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.722 [2024-07-10 14:45:04.765915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.722 [2024-07-10 14:45:04.765931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.722 [2024-07-10 14:45:04.769666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.722 [2024-07-10 14:45:04.769715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.722 [2024-07-10 14:45:04.769730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.722 [2024-07-10 14:45:04.774787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.722 [2024-07-10 14:45:04.774847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.722 [2024-07-10 14:45:04.774866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.722 [2024-07-10 14:45:04.780474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.722 [2024-07-10 14:45:04.780536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.722 [2024-07-10 14:45:04.780552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.722 [2024-07-10 14:45:04.784227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.722 [2024-07-10 14:45:04.784274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.722 [2024-07-10 14:45:04.784304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.722 [2024-07-10 14:45:04.789640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.722 [2024-07-10 14:45:04.789692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.722 [2024-07-10 14:45:04.789707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.722 [2024-07-10 14:45:04.795369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.722 [2024-07-10 14:45:04.795444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.722 [2024-07-10 14:45:04.795460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.722 [2024-07-10 14:45:04.799497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.722 [2024-07-10 14:45:04.799548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.722 [2024-07-10 14:45:04.799563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.722 [2024-07-10 14:45:04.804469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.722 [2024-07-10 14:45:04.804520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.722 [2024-07-10 14:45:04.804536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.722 [2024-07-10 14:45:04.809914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.722 [2024-07-10 14:45:04.809968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.722 [2024-07-10 14:45:04.809984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.722 [2024-07-10 14:45:04.814561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.722 [2024-07-10 14:45:04.814629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.722 [2024-07-10 14:45:04.814646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.722 [2024-07-10 14:45:04.818060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.722 [2024-07-10 14:45:04.818116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.722 [2024-07-10 14:45:04.818134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.722 [2024-07-10 14:45:04.823528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.722 [2024-07-10 14:45:04.823603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.722 [2024-07-10 14:45:04.823623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.722 [2024-07-10 14:45:04.829018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.722 [2024-07-10 14:45:04.829089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.722 [2024-07-10 14:45:04.829105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.722 [2024-07-10 14:45:04.833047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.722 [2024-07-10 14:45:04.833100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.723 [2024-07-10 14:45:04.833115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.723 [2024-07-10 14:45:04.837722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.723 [2024-07-10 14:45:04.837786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.723 [2024-07-10 14:45:04.837801] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.723 [2024-07-10 14:45:04.842782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.723 [2024-07-10 14:45:04.842843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.723 [2024-07-10 14:45:04.842858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.723 [2024-07-10 14:45:04.847306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.723 [2024-07-10 14:45:04.847367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.723 [2024-07-10 14:45:04.847382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.723 [2024-07-10 14:45:04.851103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.723 [2024-07-10 14:45:04.851168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.723 [2024-07-10 14:45:04.851186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.723 [2024-07-10 14:45:04.856586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.723 [2024-07-10 14:45:04.856659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.723 [2024-07-10 14:45:04.856678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.723 [2024-07-10 14:45:04.860642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.723 [2024-07-10 14:45:04.860697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.723 [2024-07-10 14:45:04.860712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.723 [2024-07-10 14:45:04.866337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.723 [2024-07-10 14:45:04.866416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.723 [2024-07-10 14:45:04.866440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.723 [2024-07-10 14:45:04.871227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.723 [2024-07-10 14:45:04.871305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.723 
[2024-07-10 14:45:04.871322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.723 [2024-07-10 14:45:04.875113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.723 [2024-07-10 14:45:04.875172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.723 [2024-07-10 14:45:04.875188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.723 [2024-07-10 14:45:04.880678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.723 [2024-07-10 14:45:04.880742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.723 [2024-07-10 14:45:04.880758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.723 [2024-07-10 14:45:04.885731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.723 [2024-07-10 14:45:04.885788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.723 [2024-07-10 14:45:04.885804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.723 [2024-07-10 14:45:04.890509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.723 [2024-07-10 14:45:04.890567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.723 [2024-07-10 14:45:04.890582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.723 [2024-07-10 14:45:04.895646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.723 [2024-07-10 14:45:04.895708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.723 [2024-07-10 14:45:04.895723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.723 [2024-07-10 14:45:04.899127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.723 [2024-07-10 14:45:04.899180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.723 [2024-07-10 14:45:04.899195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.723 [2024-07-10 14:45:04.904574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.723 [2024-07-10 14:45:04.904635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9088 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:52.723 [2024-07-10 14:45:04.904656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.723 [2024-07-10 14:45:04.909158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.723 [2024-07-10 14:45:04.909218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.723 [2024-07-10 14:45:04.909233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.723 [2024-07-10 14:45:04.914141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.723 [2024-07-10 14:45:04.914211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.723 [2024-07-10 14:45:04.914232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.723 [2024-07-10 14:45:04.919548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.723 [2024-07-10 14:45:04.919623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.723 [2024-07-10 14:45:04.919642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.723 [2024-07-10 14:45:04.923973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.723 [2024-07-10 14:45:04.924040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.723 [2024-07-10 14:45:04.924060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.723 [2024-07-10 14:45:04.929200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.723 [2024-07-10 14:45:04.929268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.723 [2024-07-10 14:45:04.929305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.723 [2024-07-10 14:45:04.934024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.723 [2024-07-10 14:45:04.934098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.723 [2024-07-10 14:45:04.934117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.723 [2024-07-10 14:45:04.938573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.723 [2024-07-10 14:45:04.938639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.723 [2024-07-10 14:45:04.938654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.723 [2024-07-10 14:45:04.943677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.723 [2024-07-10 14:45:04.943744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.723 [2024-07-10 14:45:04.943760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.723 [2024-07-10 14:45:04.948276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.723 [2024-07-10 14:45:04.948341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.723 [2024-07-10 14:45:04.948356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.723 [2024-07-10 14:45:04.954512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.723 [2024-07-10 14:45:04.954585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.723 [2024-07-10 14:45:04.954605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.723 [2024-07-10 14:45:04.958108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.723 [2024-07-10 14:45:04.958159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.723 [2024-07-10 14:45:04.958174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.723 [2024-07-10 14:45:04.962341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.723 [2024-07-10 14:45:04.962393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.723 [2024-07-10 14:45:04.962408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.723 [2024-07-10 14:45:04.967385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.723 [2024-07-10 14:45:04.967461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.723 [2024-07-10 14:45:04.967485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.723 [2024-07-10 14:45:04.972271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.723 [2024-07-10 14:45:04.972347] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.724 [2024-07-10 14:45:04.972366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.724 [2024-07-10 14:45:04.977162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.724 [2024-07-10 14:45:04.977215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.724 [2024-07-10 14:45:04.977231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.724 [2024-07-10 14:45:04.982640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.724 [2024-07-10 14:45:04.982694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.724 [2024-07-10 14:45:04.982709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.724 [2024-07-10 14:45:04.988169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.724 [2024-07-10 14:45:04.988231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.724 [2024-07-10 14:45:04.988250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.724 [2024-07-10 14:45:04.993755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.724 [2024-07-10 14:45:04.993821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.724 [2024-07-10 14:45:04.993839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.724 [2024-07-10 14:45:04.997938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.724 [2024-07-10 14:45:04.998004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.724 [2024-07-10 14:45:04.998024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.724 [2024-07-10 14:45:05.003134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:52.724 [2024-07-10 14:45:05.003188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.724 [2024-07-10 14:45:05.003203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.724 [2024-07-10 14:45:05.008425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 
00:26:52.724 [2024-07-10 14:45:05.008482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.724 [2024-07-10 14:45:05.008501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.012 [2024-07-10 14:45:05.013751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.012 [2024-07-10 14:45:05.013798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.012 [2024-07-10 14:45:05.013814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.012 [2024-07-10 14:45:05.018746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.012 [2024-07-10 14:45:05.018804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.012 [2024-07-10 14:45:05.018818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.012 [2024-07-10 14:45:05.023787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.012 [2024-07-10 14:45:05.023843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.012 [2024-07-10 14:45:05.023859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.012 [2024-07-10 14:45:05.029099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.012 [2024-07-10 14:45:05.029158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.012 [2024-07-10 14:45:05.029174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.012 [2024-07-10 14:45:05.033988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.012 [2024-07-10 14:45:05.034039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.012 [2024-07-10 14:45:05.034054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.012 [2024-07-10 14:45:05.038901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.012 [2024-07-10 14:45:05.038960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.013 [2024-07-10 14:45:05.038975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.013 [2024-07-10 14:45:05.043710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xcc50c0) 00:26:53.013 [2024-07-10 14:45:05.043764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.013 [2024-07-10 14:45:05.043778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.013 [2024-07-10 14:45:05.049330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.013 [2024-07-10 14:45:05.049387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.013 [2024-07-10 14:45:05.049401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.013 [2024-07-10 14:45:05.053574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.013 [2024-07-10 14:45:05.053644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.013 [2024-07-10 14:45:05.053663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.013 [2024-07-10 14:45:05.059398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.013 [2024-07-10 14:45:05.059471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.013 [2024-07-10 14:45:05.059490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.013 [2024-07-10 14:45:05.063418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.013 [2024-07-10 14:45:05.063472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.013 [2024-07-10 14:45:05.063487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.013 [2024-07-10 14:45:05.069376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.013 [2024-07-10 14:45:05.069430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.013 [2024-07-10 14:45:05.069445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.013 [2024-07-10 14:45:05.073696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.013 [2024-07-10 14:45:05.073756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.013 [2024-07-10 14:45:05.073779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.013 [2024-07-10 14:45:05.078189] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.013 [2024-07-10 14:45:05.078242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.013 [2024-07-10 14:45:05.078258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.013 [2024-07-10 14:45:05.083619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.013 [2024-07-10 14:45:05.083677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.013 [2024-07-10 14:45:05.083692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.013 [2024-07-10 14:45:05.088576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.013 [2024-07-10 14:45:05.088639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.013 [2024-07-10 14:45:05.088655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.013 [2024-07-10 14:45:05.093350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.013 [2024-07-10 14:45:05.093409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.013 [2024-07-10 14:45:05.093424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.013 [2024-07-10 14:45:05.098521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.013 [2024-07-10 14:45:05.098582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.013 [2024-07-10 14:45:05.098597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.013 [2024-07-10 14:45:05.103626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.013 [2024-07-10 14:45:05.103697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.013 [2024-07-10 14:45:05.103722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.013 [2024-07-10 14:45:05.109362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.013 [2024-07-10 14:45:05.109442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.013 [2024-07-10 14:45:05.109467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:26:53.013 [2024-07-10 14:45:05.115431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.013 [2024-07-10 14:45:05.115496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.013 [2024-07-10 14:45:05.115511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.013 [2024-07-10 14:45:05.121120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.013 [2024-07-10 14:45:05.121178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.013 [2024-07-10 14:45:05.121193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.013 [2024-07-10 14:45:05.124691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.013 [2024-07-10 14:45:05.124738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.013 [2024-07-10 14:45:05.124758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.013 [2024-07-10 14:45:05.130267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.013 [2024-07-10 14:45:05.130355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.013 [2024-07-10 14:45:05.130374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.013 [2024-07-10 14:45:05.135911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.013 [2024-07-10 14:45:05.135979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.013 [2024-07-10 14:45:05.135999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.013 [2024-07-10 14:45:05.140414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.013 [2024-07-10 14:45:05.140475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.013 [2024-07-10 14:45:05.140489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.013 [2024-07-10 14:45:05.145539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.013 [2024-07-10 14:45:05.145592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.013 [2024-07-10 14:45:05.145607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.013 [2024-07-10 14:45:05.151798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.013 [2024-07-10 14:45:05.151871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.013 [2024-07-10 14:45:05.151891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.013 [2024-07-10 14:45:05.156954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.013 [2024-07-10 14:45:05.157011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.013 [2024-07-10 14:45:05.157026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.013 [2024-07-10 14:45:05.162438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.013 [2024-07-10 14:45:05.162498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.013 [2024-07-10 14:45:05.162513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.013 [2024-07-10 14:45:05.167442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.013 [2024-07-10 14:45:05.167501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.013 [2024-07-10 14:45:05.167517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.013 [2024-07-10 14:45:05.172437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.013 [2024-07-10 14:45:05.172503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.013 [2024-07-10 14:45:05.172518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.013 [2024-07-10 14:45:05.177702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.013 [2024-07-10 14:45:05.177764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.013 [2024-07-10 14:45:05.177779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.013 [2024-07-10 14:45:05.183548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.013 [2024-07-10 14:45:05.183612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.013 [2024-07-10 14:45:05.183628] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.013 [2024-07-10 14:45:05.188318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.014 [2024-07-10 14:45:05.188380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.014 [2024-07-10 14:45:05.188396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.014 [2024-07-10 14:45:05.192416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.014 [2024-07-10 14:45:05.192470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.014 [2024-07-10 14:45:05.192485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.014 [2024-07-10 14:45:05.197348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.014 [2024-07-10 14:45:05.197416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.014 [2024-07-10 14:45:05.197431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.014 [2024-07-10 14:45:05.203145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.014 [2024-07-10 14:45:05.203213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.014 [2024-07-10 14:45:05.203229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.014 [2024-07-10 14:45:05.208332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.014 [2024-07-10 14:45:05.208394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.014 [2024-07-10 14:45:05.208408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.014 [2024-07-10 14:45:05.213350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.014 [2024-07-10 14:45:05.213409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.014 [2024-07-10 14:45:05.213424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.014 [2024-07-10 14:45:05.217743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.014 [2024-07-10 14:45:05.217793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.014 [2024-07-10 14:45:05.217807] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.014 [2024-07-10 14:45:05.222274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.014 [2024-07-10 14:45:05.222341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.014 [2024-07-10 14:45:05.222356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.014 [2024-07-10 14:45:05.227534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.014 [2024-07-10 14:45:05.227597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.014 [2024-07-10 14:45:05.227612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.014 [2024-07-10 14:45:05.232266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.014 [2024-07-10 14:45:05.232346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.014 [2024-07-10 14:45:05.232363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.014 [2024-07-10 14:45:05.236956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.014 [2024-07-10 14:45:05.237014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.014 [2024-07-10 14:45:05.237029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.014 [2024-07-10 14:45:05.242110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.014 [2024-07-10 14:45:05.242170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.014 [2024-07-10 14:45:05.242185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.014 [2024-07-10 14:45:05.247923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.014 [2024-07-10 14:45:05.247991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.014 [2024-07-10 14:45:05.248017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.014 [2024-07-10 14:45:05.252183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.014 [2024-07-10 14:45:05.252254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:53.014 [2024-07-10 14:45:05.252296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.014 [2024-07-10 14:45:05.257768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.014 [2024-07-10 14:45:05.257846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.014 [2024-07-10 14:45:05.257875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.014 [2024-07-10 14:45:05.262237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.014 [2024-07-10 14:45:05.262328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.014 [2024-07-10 14:45:05.262353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.014 [2024-07-10 14:45:05.267700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.014 [2024-07-10 14:45:05.267769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.014 [2024-07-10 14:45:05.267799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.014 [2024-07-10 14:45:05.272829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.014 [2024-07-10 14:45:05.272907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.014 [2024-07-10 14:45:05.272930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.014 [2024-07-10 14:45:05.277812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.014 [2024-07-10 14:45:05.277879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.014 [2024-07-10 14:45:05.277903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.014 [2024-07-10 14:45:05.282205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.014 [2024-07-10 14:45:05.282269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.014 [2024-07-10 14:45:05.282307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.014 [2024-07-10 14:45:05.287752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc50c0) 00:26:53.014 [2024-07-10 14:45:05.287821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.014 [2024-07-10 14:45:05.287847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.014 00:26:53.014 Latency(us) 00:26:53.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:53.014 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:53.014 nvme0n1 : 2.00 6546.27 818.28 0.00 0.00 2439.58 722.39 7804.74 00:26:53.014 =================================================================================================================== 00:26:53.014 Total : 6546.27 818.28 0.00 0.00 2439.58 722.39 7804.74 00:26:53.014 0 00:26:53.272 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:53.272 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:53.272 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:53.272 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:53.272 | .driver_specific 00:26:53.272 | .nvme_error 00:26:53.272 | .status_code 00:26:53.272 | .command_transient_transport_error' 00:26:53.531 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 422 > 0 )) 00:26:53.531 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 112904 00:26:53.531 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 112904 ']' 00:26:53.531 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 112904 00:26:53.531 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:53.531 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:53.531 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112904 00:26:53.531 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:53.531 killing process with pid 112904 00:26:53.531 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:53.531 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112904' 00:26:53.531 Received shutdown signal, test time was about 2.000000 seconds 00:26:53.531 00:26:53.531 Latency(us) 00:26:53.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:53.531 =================================================================================================================== 00:26:53.531 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:53.531 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 112904 00:26:53.531 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 112904 00:26:53.531 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:53.531 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:53.531 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 
00:26:53.531 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:53.531 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:53.531 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=112971 00:26:53.531 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:53.531 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 112971 /var/tmp/bperf.sock 00:26:53.531 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 112971 ']' 00:26:53.531 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:53.531 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:53.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:53.531 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:53.531 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:53.531 14:45:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:53.531 [2024-07-10 14:45:05.812987] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:26:53.531 [2024-07-10 14:45:05.813130] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112971 ] 00:26:53.789 [2024-07-10 14:45:05.939711] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:26:53.789 [2024-07-10 14:45:05.954552] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.789 [2024-07-10 14:45:05.998592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.047 14:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:54.047 14:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:54.047 14:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:54.047 14:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:54.305 14:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:54.305 14:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.305 14:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:54.305 14:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.305 14:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:54.305 14:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:54.871 nvme0n1 00:26:54.871 14:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:54.871 14:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.871 14:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:54.871 14:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.871 14:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:54.871 14:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:54.871 Running I/O for 2 seconds... 
00:26:54.871 [2024-07-10 14:45:07.055646] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f6458 00:26:54.871 [2024-07-10 14:45:07.056402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.871 [2024-07-10 14:45:07.056450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:54.871 [2024-07-10 14:45:07.069885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f9f68 00:26:54.871 [2024-07-10 14:45:07.070783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.871 [2024-07-10 14:45:07.070828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:54.871 [2024-07-10 14:45:07.081494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e9168 00:26:54.872 [2024-07-10 14:45:07.082353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.872 [2024-07-10 14:45:07.082396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:54.872 [2024-07-10 14:45:07.093152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f7538 00:26:54.872 [2024-07-10 14:45:07.093838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.872 [2024-07-10 14:45:07.093878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:54.872 [2024-07-10 14:45:07.105092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f35f0 00:26:54.872 [2024-07-10 14:45:07.106185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.872 [2024-07-10 14:45:07.106225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:54.872 [2024-07-10 14:45:07.119515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190fa3a0 00:26:54.872 [2024-07-10 14:45:07.121273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.872 [2024-07-10 14:45:07.121328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:54.872 [2024-07-10 14:45:07.128046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190fe2e8 00:26:54.872 [2024-07-10 14:45:07.128835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.872 [2024-07-10 14:45:07.128883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 
sqhd:0008 p:0 m:0 dnr:0 00:26:54.872 [2024-07-10 14:45:07.142454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e5220 00:26:54.872 [2024-07-10 14:45:07.143910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.872 [2024-07-10 14:45:07.143951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:54.872 [2024-07-10 14:45:07.153628] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190ed0b0 00:26:54.872 [2024-07-10 14:45:07.154801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.872 [2024-07-10 14:45:07.154842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:55.131 [2024-07-10 14:45:07.165335] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e3060 00:26:55.131 [2024-07-10 14:45:07.166512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.131 [2024-07-10 14:45:07.166551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:55.131 [2024-07-10 14:45:07.179778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190fc560 00:26:55.131 [2024-07-10 14:45:07.181663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.131 [2024-07-10 14:45:07.181707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:55.131 [2024-07-10 14:45:07.192046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190de470 00:26:55.131 [2024-07-10 14:45:07.193904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.131 [2024-07-10 14:45:07.193945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:55.131 [2024-07-10 14:45:07.201951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190ee5c8 00:26:55.131 [2024-07-10 14:45:07.202887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.131 [2024-07-10 14:45:07.202929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:55.131 [2024-07-10 14:45:07.214209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e2c28 00:26:55.131 [2024-07-10 14:45:07.215596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.131 [2024-07-10 14:45:07.215637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:55.131 [2024-07-10 14:45:07.225466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190fe720 00:26:55.131 [2024-07-10 14:45:07.226581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.131 [2024-07-10 14:45:07.226624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:55.131 [2024-07-10 14:45:07.237363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e95a0 00:26:55.131 [2024-07-10 14:45:07.238475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.131 [2024-07-10 14:45:07.238517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:55.131 [2024-07-10 14:45:07.249589] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190eff18 00:26:55.131 [2024-07-10 14:45:07.250180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.131 [2024-07-10 14:45:07.250223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:55.131 [2024-07-10 14:45:07.263082] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e7c50 00:26:55.131 [2024-07-10 14:45:07.264505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.131 [2024-07-10 14:45:07.264546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:55.131 [2024-07-10 14:45:07.274586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f8e88 00:26:55.131 [2024-07-10 14:45:07.275851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.131 [2024-07-10 14:45:07.275889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:55.131 [2024-07-10 14:45:07.286137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e1f80 00:26:55.131 [2024-07-10 14:45:07.287189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.131 [2024-07-10 14:45:07.287230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:55.131 [2024-07-10 14:45:07.297362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e73e0 00:26:55.131 [2024-07-10 14:45:07.298213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.131 [2024-07-10 14:45:07.298251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:55.131 [2024-07-10 14:45:07.312225] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f9b30 00:26:55.131 [2024-07-10 14:45:07.313692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.131 [2024-07-10 14:45:07.313735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:55.131 [2024-07-10 14:45:07.323628] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f6cc8 00:26:55.131 [2024-07-10 14:45:07.324876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.131 [2024-07-10 14:45:07.324918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:55.131 [2024-07-10 14:45:07.335491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e84c0 00:26:55.131 [2024-07-10 14:45:07.336762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.131 [2024-07-10 14:45:07.336802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:55.131 [2024-07-10 14:45:07.350025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190fe2e8 00:26:55.131 [2024-07-10 14:45:07.351980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.131 [2024-07-10 14:45:07.352025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:55.131 [2024-07-10 14:45:07.358628] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f1430 00:26:55.131 [2024-07-10 14:45:07.359629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.131 [2024-07-10 14:45:07.359669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:55.131 [2024-07-10 14:45:07.370924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f31b8 00:26:55.131 [2024-07-10 14:45:07.371916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.131 [2024-07-10 14:45:07.371959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:55.132 [2024-07-10 14:45:07.382554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e84c0 00:26:55.132 [2024-07-10 14:45:07.383423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.132 [2024-07-10 14:45:07.383469] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:55.132 [2024-07-10 14:45:07.397257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e73e0 00:26:55.132 [2024-07-10 14:45:07.398936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.132 [2024-07-10 14:45:07.398983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:55.132 [2024-07-10 14:45:07.408509] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190de038 00:26:55.132 [2024-07-10 14:45:07.409933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.132 [2024-07-10 14:45:07.409978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:55.132 [2024-07-10 14:45:07.420342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f1ca0 00:26:55.391 [2024-07-10 14:45:07.421741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.391 [2024-07-10 14:45:07.421785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:55.391 [2024-07-10 14:45:07.432660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e7818 00:26:55.391 [2024-07-10 14:45:07.434045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:18720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.391 [2024-07-10 14:45:07.434088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:55.391 [2024-07-10 14:45:07.446420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e5658 00:26:55.391 [2024-07-10 14:45:07.448274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.391 [2024-07-10 14:45:07.448327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:55.391 [2024-07-10 14:45:07.455055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e5658 00:26:55.391 [2024-07-10 14:45:07.455962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.391 [2024-07-10 14:45:07.456002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:55.391 [2024-07-10 14:45:07.467377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e0a68 00:26:55.391 [2024-07-10 14:45:07.468266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.391 [2024-07-10 14:45:07.468317] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.391 [2024-07-10 14:45:07.481112] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f6458 00:26:55.391 [2024-07-10 14:45:07.482533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.391 [2024-07-10 14:45:07.482574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:55.391 [2024-07-10 14:45:07.492356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e49b0 00:26:55.391 [2024-07-10 14:45:07.493487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.391 [2024-07-10 14:45:07.493528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:55.391 [2024-07-10 14:45:07.504148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e88f8 00:26:55.391 [2024-07-10 14:45:07.505249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.391 [2024-07-10 14:45:07.505302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:55.391 [2024-07-10 14:45:07.518668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190fc998 00:26:55.391 [2024-07-10 14:45:07.520452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.391 [2024-07-10 14:45:07.520494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:55.391 [2024-07-10 14:45:07.529439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f4b08 00:26:55.391 [2024-07-10 14:45:07.530393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.391 [2024-07-10 14:45:07.530439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:55.391 [2024-07-10 14:45:07.541073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e8d30 00:26:55.391 [2024-07-10 14:45:07.542203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.391 [2024-07-10 14:45:07.542245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:55.391 [2024-07-10 14:45:07.555060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190ed4e8 00:26:55.391 [2024-07-10 14:45:07.556708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.391 [2024-07-10 
14:45:07.556757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:55.391 [2024-07-10 14:45:07.566464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f6cc8 00:26:55.391 [2024-07-10 14:45:07.567139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.391 [2024-07-10 14:45:07.567186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:55.391 [2024-07-10 14:45:07.581107] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f35f0 00:26:55.391 [2024-07-10 14:45:07.582894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.391 [2024-07-10 14:45:07.582936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:55.391 [2024-07-10 14:45:07.593686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f7970 00:26:55.391 [2024-07-10 14:45:07.595606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.391 [2024-07-10 14:45:07.595646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:55.391 [2024-07-10 14:45:07.602455] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f8618 00:26:55.391 [2024-07-10 14:45:07.603423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.391 [2024-07-10 14:45:07.603470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:55.391 [2024-07-10 14:45:07.618131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e3498 00:26:55.391 [2024-07-10 14:45:07.619955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.391 [2024-07-10 14:45:07.620009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:55.391 [2024-07-10 14:45:07.629892] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190fe720 00:26:55.391 [2024-07-10 14:45:07.631486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.391 [2024-07-10 14:45:07.631543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:55.391 [2024-07-10 14:45:07.641826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e3d08 00:26:55.391 [2024-07-10 14:45:07.643151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:55.391 [2024-07-10 14:45:07.643205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:55.391 [2024-07-10 14:45:07.654098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190ed4e8 00:26:55.391 [2024-07-10 14:45:07.654978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.391 [2024-07-10 14:45:07.655032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:55.391 [2024-07-10 14:45:07.665832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190fa7d8 00:26:55.391 [2024-07-10 14:45:07.667149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.391 [2024-07-10 14:45:07.667203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:55.391 [2024-07-10 14:45:07.677652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e99d8 00:26:55.391 [2024-07-10 14:45:07.678864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.391 [2024-07-10 14:45:07.678921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:55.650 [2024-07-10 14:45:07.692572] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e4578 00:26:55.650 [2024-07-10 14:45:07.694481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.650 [2024-07-10 14:45:07.694538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:55.650 [2024-07-10 14:45:07.701477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e4de8 00:26:55.650 [2024-07-10 14:45:07.702383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.650 [2024-07-10 14:45:07.702433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:55.650 [2024-07-10 14:45:07.716324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f6cc8 00:26:55.650 [2024-07-10 14:45:07.717934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.650 [2024-07-10 14:45:07.717989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:55.650 [2024-07-10 14:45:07.727856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e0a68 00:26:55.650 [2024-07-10 14:45:07.729320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10791 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:55.650 [2024-07-10 14:45:07.729382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:55.650 [2024-07-10 14:45:07.739850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190ec408 00:26:55.650 [2024-07-10 14:45:07.741171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.650 [2024-07-10 14:45:07.741228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:55.650 [2024-07-10 14:45:07.754698] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e1b48 00:26:55.650 [2024-07-10 14:45:07.756671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.650 [2024-07-10 14:45:07.756735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:55.650 [2024-07-10 14:45:07.763569] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190fcdd0 00:26:55.650 [2024-07-10 14:45:07.764556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.650 [2024-07-10 14:45:07.764608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:55.650 [2024-07-10 14:45:07.775945] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e9e10 00:26:55.650 [2024-07-10 14:45:07.776994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.650 [2024-07-10 14:45:07.777047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:55.650 [2024-07-10 14:45:07.791569] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190fa7d8 00:26:55.650 [2024-07-10 14:45:07.793454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.651 [2024-07-10 14:45:07.793511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:55.651 [2024-07-10 14:45:07.800529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f6458 00:26:55.651 [2024-07-10 14:45:07.801407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.651 [2024-07-10 14:45:07.801459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:55.651 [2024-07-10 14:45:07.815441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190fb480 00:26:55.651 [2024-07-10 14:45:07.816998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24397 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.651 [2024-07-10 14:45:07.817054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:55.651 [2024-07-10 14:45:07.827867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f9b30 00:26:55.651 [2024-07-10 14:45:07.828931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.651 [2024-07-10 14:45:07.828986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.651 [2024-07-10 14:45:07.838990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190eaef0 00:26:55.651 [2024-07-10 14:45:07.840215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.651 [2024-07-10 14:45:07.840270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.651 [2024-07-10 14:45:07.851967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e23b8 00:26:55.651 [2024-07-10 14:45:07.853089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.651 [2024-07-10 14:45:07.853144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.651 [2024-07-10 14:45:07.862902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f1ca0 00:26:55.651 [2024-07-10 14:45:07.864383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.651 [2024-07-10 14:45:07.864438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:55.651 [2024-07-10 14:45:07.874953] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f7da8 00:26:55.651 [2024-07-10 14:45:07.876185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.651 [2024-07-10 14:45:07.876238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:55.651 [2024-07-10 14:45:07.889972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f4f40 00:26:55.651 [2024-07-10 14:45:07.891885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.651 [2024-07-10 14:45:07.891937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:55.651 [2024-07-10 14:45:07.898871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f6020 00:26:55.651 [2024-07-10 14:45:07.899804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 
nsid:1 lba:17552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.651 [2024-07-10 14:45:07.899848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:55.651 [2024-07-10 14:45:07.913533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f1430 00:26:55.651 [2024-07-10 14:45:07.915111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.651 [2024-07-10 14:45:07.915154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:55.651 [2024-07-10 14:45:07.924853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190fc128 00:26:55.651 [2024-07-10 14:45:07.926179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.651 [2024-07-10 14:45:07.926224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:55.651 [2024-07-10 14:45:07.936648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190fda78 00:26:55.651 [2024-07-10 14:45:07.937811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.651 [2024-07-10 14:45:07.937856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:55.909 [2024-07-10 14:45:07.948388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190ec408 00:26:55.909 [2024-07-10 14:45:07.949665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.909 [2024-07-10 14:45:07.949714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:55.909 [2024-07-10 14:45:07.963575] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f1868 00:26:55.910 [2024-07-10 14:45:07.964652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.910 [2024-07-10 14:45:07.964706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:55.910 [2024-07-10 14:45:07.981357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190fef90 00:26:55.910 [2024-07-10 14:45:07.983252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.910 [2024-07-10 14:45:07.983318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:55.910 [2024-07-10 14:45:07.994211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190eff18 00:26:55.910 [2024-07-10 14:45:07.995035] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.910 [2024-07-10 14:45:07.995084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:55.910 [2024-07-10 14:45:08.006552] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190fa7d8 00:26:55.910 [2024-07-10 14:45:08.007524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.910 [2024-07-10 14:45:08.007572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:55.910 [2024-07-10 14:45:08.017934] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f5be8 00:26:55.910 [2024-07-10 14:45:08.018750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.910 [2024-07-10 14:45:08.018797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:55.910 [2024-07-10 14:45:08.033664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e9168 00:26:55.910 [2024-07-10 14:45:08.035637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.910 [2024-07-10 14:45:08.035700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:55.910 [2024-07-10 14:45:08.042395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190ee5c8 00:26:55.910 [2024-07-10 14:45:08.043370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.910 [2024-07-10 14:45:08.043416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:55.910 [2024-07-10 14:45:08.056918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e3d08 00:26:55.910 [2024-07-10 14:45:08.058593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.910 [2024-07-10 14:45:08.058656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:55.910 [2024-07-10 14:45:08.068840] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190ecc78 00:26:55.910 [2024-07-10 14:45:08.070354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.910 [2024-07-10 14:45:08.070405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:55.910 [2024-07-10 14:45:08.080551] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190eaef0 00:26:55.910 [2024-07-10 14:45:08.081915] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.910 [2024-07-10 14:45:08.081963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:55.910 [2024-07-10 14:45:08.091829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f7da8 00:26:55.910 [2024-07-10 14:45:08.093083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.910 [2024-07-10 14:45:08.093131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:55.910 [2024-07-10 14:45:08.103749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f9b30 00:26:55.910 [2024-07-10 14:45:08.104826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.910 [2024-07-10 14:45:08.104885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:55.910 [2024-07-10 14:45:08.118342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f20d8 00:26:55.910 [2024-07-10 14:45:08.120068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.910 [2024-07-10 14:45:08.120119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:55.910 [2024-07-10 14:45:08.127036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190ecc78 00:26:55.910 [2024-07-10 14:45:08.127827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.910 [2024-07-10 14:45:08.127877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:55.910 [2024-07-10 14:45:08.141744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190ec408 00:26:55.910 [2024-07-10 14:45:08.143193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.910 [2024-07-10 14:45:08.143241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:55.910 [2024-07-10 14:45:08.153072] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e6300 00:26:55.910 [2024-07-10 14:45:08.154331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.910 [2024-07-10 14:45:08.154376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:55.910 [2024-07-10 14:45:08.164774] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e7818 00:26:55.910 [2024-07-10 
14:45:08.165914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.910 [2024-07-10 14:45:08.165959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:55.910 [2024-07-10 14:45:08.179221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f5be8 00:26:55.910 [2024-07-10 14:45:08.181041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.910 [2024-07-10 14:45:08.181089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:55.910 [2024-07-10 14:45:08.187885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e95a0 00:26:55.910 [2024-07-10 14:45:08.188730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.910 [2024-07-10 14:45:08.188775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:56.169 [2024-07-10 14:45:08.202602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190eaab8 00:26:56.169 [2024-07-10 14:45:08.204135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.169 [2024-07-10 14:45:08.204187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:56.169 [2024-07-10 14:45:08.213985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190dece0 00:26:56.169 [2024-07-10 14:45:08.215312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.169 [2024-07-10 14:45:08.215358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:56.169 [2024-07-10 14:45:08.225757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190eaab8 00:26:56.169 [2024-07-10 14:45:08.227001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.169 [2024-07-10 14:45:08.227049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:56.169 [2024-07-10 14:45:08.240358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e95a0 00:26:56.169 [2024-07-10 14:45:08.242254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.169 [2024-07-10 14:45:08.242313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:56.169 [2024-07-10 14:45:08.249026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f5be8 
00:26:56.169 [2024-07-10 14:45:08.249957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.169 [2024-07-10 14:45:08.250002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:56.169 [2024-07-10 14:45:08.263590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e7818 00:26:56.169 [2024-07-10 14:45:08.265206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.169 [2024-07-10 14:45:08.265254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:56.169 [2024-07-10 14:45:08.275061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190ee5c8 00:26:56.169 [2024-07-10 14:45:08.276442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.169 [2024-07-10 14:45:08.276489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:56.169 [2024-07-10 14:45:08.287351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190eff18 00:26:56.169 [2024-07-10 14:45:08.288491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.169 [2024-07-10 14:45:08.288536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:56.170 [2024-07-10 14:45:08.298741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f0788 00:26:56.170 [2024-07-10 14:45:08.299747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.170 [2024-07-10 14:45:08.299795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:56.170 [2024-07-10 14:45:08.310368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190fb480 00:26:56.170 [2024-07-10 14:45:08.311226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.170 [2024-07-10 14:45:08.311274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:56.170 [2024-07-10 14:45:08.324237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e1f80 00:26:56.170 [2024-07-10 14:45:08.325354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.170 [2024-07-10 14:45:08.325406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:56.170 [2024-07-10 14:45:08.335373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with 
pdu=0x2000190ef270 00:26:56.170 [2024-07-10 14:45:08.336773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:14684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.170 [2024-07-10 14:45:08.336824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:56.170 [2024-07-10 14:45:08.347314] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e4de8 00:26:56.170 [2024-07-10 14:45:08.348500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.170 [2024-07-10 14:45:08.348549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:56.170 [2024-07-10 14:45:08.362131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190edd58 00:26:56.170 [2024-07-10 14:45:08.363973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.170 [2024-07-10 14:45:08.364020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:56.170 [2024-07-10 14:45:08.370873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190df118 00:26:56.170 [2024-07-10 14:45:08.371725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.170 [2024-07-10 14:45:08.371772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:56.170 [2024-07-10 14:45:08.385440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e6fa8 00:26:56.170 [2024-07-10 14:45:08.386822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.170 [2024-07-10 14:45:08.386871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:56.170 [2024-07-10 14:45:08.396812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f7da8 00:26:56.170 [2024-07-10 14:45:08.398014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.170 [2024-07-10 14:45:08.398063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:56.170 [2024-07-10 14:45:08.408370] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e4de8 00:26:56.170 [2024-07-10 14:45:08.409473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.170 [2024-07-10 14:45:08.409525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:56.170 [2024-07-10 14:45:08.419996] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x15b52e0) with pdu=0x2000190de038 00:26:56.170 [2024-07-10 14:45:08.420964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.170 [2024-07-10 14:45:08.421015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:56.170 [2024-07-10 14:45:08.433994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f2510 00:26:56.170 [2024-07-10 14:45:08.435077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.170 [2024-07-10 14:45:08.435127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.170 [2024-07-10 14:45:08.444961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190edd58 00:26:56.170 [2024-07-10 14:45:08.446188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.170 [2024-07-10 14:45:08.446239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:56.429 [2024-07-10 14:45:08.459659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e5a90 00:26:56.429 [2024-07-10 14:45:08.461542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.429 [2024-07-10 14:45:08.461589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:56.429 [2024-07-10 14:45:08.468375] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190ef270 00:26:56.429 [2024-07-10 14:45:08.469272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.429 [2024-07-10 14:45:08.469330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:56.429 [2024-07-10 14:45:08.482928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190ed920 00:26:56.429 [2024-07-10 14:45:08.484514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.429 [2024-07-10 14:45:08.484561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:56.429 [2024-07-10 14:45:08.494236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f6cc8 00:26:56.429 [2024-07-10 14:45:08.495628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.429 [2024-07-10 14:45:08.495679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:56.429 [2024-07-10 14:45:08.506075] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f46d0 00:26:56.429 [2024-07-10 14:45:08.507437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.429 [2024-07-10 14:45:08.507489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:56.429 [2024-07-10 14:45:08.521354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190ddc00 00:26:56.429 [2024-07-10 14:45:08.523343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.429 [2024-07-10 14:45:08.523395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:56.429 [2024-07-10 14:45:08.530226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e8d30 00:26:56.429 [2024-07-10 14:45:08.531256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.429 [2024-07-10 14:45:08.531319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:56.429 [2024-07-10 14:45:08.545073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e0630 00:26:56.429 [2024-07-10 14:45:08.546803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.429 [2024-07-10 14:45:08.546849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:56.429 [2024-07-10 14:45:08.556601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f6020 00:26:56.429 [2024-07-10 14:45:08.558172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.429 [2024-07-10 14:45:08.558225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:56.429 [2024-07-10 14:45:08.568532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e4de8 00:26:56.429 [2024-07-10 14:45:08.569911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.429 [2024-07-10 14:45:08.569959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:56.429 [2024-07-10 14:45:08.580780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f7970 00:26:56.429 [2024-07-10 14:45:08.581697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.429 [2024-07-10 14:45:08.581747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:56.429 [2024-07-10 14:45:08.592300] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f35f0 00:26:56.429 [2024-07-10 14:45:08.593073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.429 [2024-07-10 14:45:08.593125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:56.429 [2024-07-10 14:45:08.603141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190edd58 00:26:56.429 [2024-07-10 14:45:08.604057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.429 [2024-07-10 14:45:08.604106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:56.429 [2024-07-10 14:45:08.617731] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190df118 00:26:56.429 [2024-07-10 14:45:08.619332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.429 [2024-07-10 14:45:08.619380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:56.429 [2024-07-10 14:45:08.630530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f0bc0 00:26:56.429 [2024-07-10 14:45:08.632115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.429 [2024-07-10 14:45:08.632165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:56.429 [2024-07-10 14:45:08.640715] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190fef90 00:26:56.429 [2024-07-10 14:45:08.641373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.429 [2024-07-10 14:45:08.641425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:56.429 [2024-07-10 14:45:08.653116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f0350 00:26:56.429 [2024-07-10 14:45:08.654058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.429 [2024-07-10 14:45:08.654107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:56.429 [2024-07-10 14:45:08.664558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e0a68 00:26:56.429 [2024-07-10 14:45:08.665343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.429 [2024-07-10 14:45:08.665392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:56.429 
[2024-07-10 14:45:08.679662] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190df550 00:26:56.429 [2024-07-10 14:45:08.681426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.429 [2024-07-10 14:45:08.681478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:56.429 [2024-07-10 14:45:08.688505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190eaab8 00:26:56.429 [2024-07-10 14:45:08.689455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.429 [2024-07-10 14:45:08.689499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:56.430 [2024-07-10 14:45:08.702969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e5ec8 00:26:56.430 [2024-07-10 14:45:08.704594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.430 [2024-07-10 14:45:08.704642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:56.430 [2024-07-10 14:45:08.714311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190ed920 00:26:56.430 [2024-07-10 14:45:08.715892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.430 [2024-07-10 14:45:08.715942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:56.688 [2024-07-10 14:45:08.726221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190fda78 00:26:56.688 [2024-07-10 14:45:08.727574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.688 [2024-07-10 14:45:08.727625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:56.688 [2024-07-10 14:45:08.741045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190ebb98 00:26:56.688 [2024-07-10 14:45:08.743114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.688 [2024-07-10 14:45:08.743171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:56.688 [2024-07-10 14:45:08.750123] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e4de8 00:26:56.688 [2024-07-10 14:45:08.751208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.688 [2024-07-10 14:45:08.751271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 
sqhd:001f p:0 m:0 dnr:0 00:26:56.688 [2024-07-10 14:45:08.765436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f7538 00:26:56.688 [2024-07-10 14:45:08.767163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.688 [2024-07-10 14:45:08.767217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:56.688 [2024-07-10 14:45:08.777011] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f20d8 00:26:56.688 [2024-07-10 14:45:08.778625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.688 [2024-07-10 14:45:08.778679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.688 [2024-07-10 14:45:08.789267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e6300 00:26:56.688 [2024-07-10 14:45:08.790744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.688 [2024-07-10 14:45:08.790795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:56.688 [2024-07-10 14:45:08.802031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190fa7d8 00:26:56.688 [2024-07-10 14:45:08.803516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.688 [2024-07-10 14:45:08.803567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:56.689 [2024-07-10 14:45:08.814013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190eaab8 00:26:56.689 [2024-07-10 14:45:08.815617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.689 [2024-07-10 14:45:08.815669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:56.689 [2024-07-10 14:45:08.826489] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e5220 00:26:56.689 [2024-07-10 14:45:08.827804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.689 [2024-07-10 14:45:08.827855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:56.689 [2024-07-10 14:45:08.841597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f1430 00:26:56.689 [2024-07-10 14:45:08.843582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.689 [2024-07-10 14:45:08.843636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:79 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:56.689 [2024-07-10 14:45:08.850498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f4298 00:26:56.689 [2024-07-10 14:45:08.851537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.689 [2024-07-10 14:45:08.851588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:56.689 [2024-07-10 14:45:08.862938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190fdeb0 00:26:56.689 [2024-07-10 14:45:08.864004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.689 [2024-07-10 14:45:08.864064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:56.689 [2024-07-10 14:45:08.875303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190fe720 00:26:56.689 [2024-07-10 14:45:08.876347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.689 [2024-07-10 14:45:08.876399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:56.689 [2024-07-10 14:45:08.890213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190ed4e8 00:26:56.689 [2024-07-10 14:45:08.892039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.689 [2024-07-10 14:45:08.892093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:56.689 [2024-07-10 14:45:08.899053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f0788 00:26:56.689 [2024-07-10 14:45:08.899924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.689 [2024-07-10 14:45:08.899976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:56.689 [2024-07-10 14:45:08.914325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e0ea0 00:26:56.689 [2024-07-10 14:45:08.915865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.689 [2024-07-10 14:45:08.915925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:56.689 [2024-07-10 14:45:08.926111] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190fe720 00:26:56.689 [2024-07-10 14:45:08.927714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.689 [2024-07-10 14:45:08.927769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:56.689 [2024-07-10 14:45:08.939092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f8618 00:26:56.689 [2024-07-10 14:45:08.940411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.689 [2024-07-10 14:45:08.940461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:56.689 [2024-07-10 14:45:08.954839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190eb760 00:26:56.689 [2024-07-10 14:45:08.956866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.689 [2024-07-10 14:45:08.956927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:56.689 [2024-07-10 14:45:08.964265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e99d8 00:26:56.689 [2024-07-10 14:45:08.965272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.689 [2024-07-10 14:45:08.965332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:56.947 [2024-07-10 14:45:08.986642] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e0ea0 00:26:56.947 [2024-07-10 14:45:08.988756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.947 [2024-07-10 14:45:08.988827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:56.947 [2024-07-10 14:45:08.996204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190eee38 00:26:56.947 [2024-07-10 14:45:08.997339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.947 [2024-07-10 14:45:08.997413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:56.947 [2024-07-10 14:45:09.013179] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190e49b0 00:26:56.947 [2024-07-10 14:45:09.014859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.947 [2024-07-10 14:45:09.014910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:56.947 [2024-07-10 14:45:09.024842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190f2d80 00:26:56.947 [2024-07-10 14:45:09.026348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.947 [2024-07-10 14:45:09.026416] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:56.947 [2024-07-10 14:45:09.036947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15b52e0) with pdu=0x2000190ea680 00:26:56.947 [2024-07-10 14:45:09.038274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.947 [2024-07-10 14:45:09.038340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:56.947 00:26:56.947 Latency(us) 00:26:56.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:56.947 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:56.947 nvme0n1 : 2.01 20577.71 80.38 0.00 0.00 6213.54 2517.18 19065.02 00:26:56.947 =================================================================================================================== 00:26:56.947 Total : 20577.71 80.38 0.00 0.00 6213.54 2517.18 19065.02 00:26:56.947 0 00:26:56.947 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:56.947 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:56.948 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:56.948 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:56.948 | .driver_specific 00:26:56.948 | .nvme_error 00:26:56.948 | .status_code 00:26:56.948 | .command_transient_transport_error' 00:26:57.206 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 161 > 0 )) 00:26:57.206 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 112971 00:26:57.206 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 112971 ']' 00:26:57.206 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 112971 00:26:57.206 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:57.206 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:57.206 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112971 00:26:57.206 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:57.206 killing process with pid 112971 00:26:57.206 Received shutdown signal, test time was about 2.000000 seconds 00:26:57.206 00:26:57.206 Latency(us) 00:26:57.206 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:57.206 =================================================================================================================== 00:26:57.206 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:57.206 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:57.206 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112971' 00:26:57.206 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 112971 00:26:57.206 14:45:09 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 112971 00:26:57.465 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:57.465 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:57.465 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:57.465 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:57.465 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:57.465 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=113048 00:26:57.465 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:57.465 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 113048 /var/tmp/bperf.sock 00:26:57.465 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 113048 ']' 00:26:57.465 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:57.465 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:57.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:57.465 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:57.465 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:57.465 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:57.465 [2024-07-10 14:45:09.621413] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:26:57.465 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:57.465 Zero copy mechanism will not be used. 00:26:57.465 [2024-07-10 14:45:09.621522] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113048 ] 00:26:57.465 [2024-07-10 14:45:09.741559] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
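At this point the first bdevperf run has been torn down and a second instance is initializing above. The check that ended the first run is the core of the test: host/digest.sh asks bdevperf for per-bdev I/O statistics over its RPC socket and extracts a single NVMe error counter with jq. A condensed sketch of that step, restating only the socket path, bdev name, and jq filter that appear in the trace:

    # Read the NVMe error counters that --nvme-error-stat makes bdevperf track.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0]
        | .driver_specific
        | .nvme_error
        | .status_code
        | .command_transient_transport_error'
    # The test proceeds only if the counter is non-zero, which is what the
    # "(( 161 > 0 ))" check in the trace verifies, and then kills the
    # bdevperf process (pid 112971 in this run).

The 161 transient transport errors counted here correspond to the COMMAND TRANSIENT TRANSPORT ERROR completions printed throughout the preceding write flood.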
00:26:57.465 [2024-07-10 14:45:09.755393] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.724 [2024-07-10 14:45:09.800313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:57.724 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:57.724 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:57.724 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:57.724 14:45:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:57.982 14:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:57.982 14:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.982 14:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:57.982 14:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.982 14:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:57.982 14:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:58.548 nvme0n1 00:26:58.548 14:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:58.548 14:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.548 14:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:58.548 14:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.548 14:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:58.548 14:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:58.548 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:58.548 Zero copy mechanism will not be used. 00:26:58.548 Running I/O for 2 seconds... 
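Before the timed run starts, the trace above records how the error path is armed for this pass: NVMe error statistics and unlimited bdev retries are enabled, the controller is attached with the TCP data digest (--ddgst) turned on, and the accel layer is told to corrupt 32 crc32c operations so that data-digest verification fails and surfaces as transient transport errors. A minimal sketch of that sequence, restating only the RPC calls visible in the trace (bperf_rpc is the suite's wrapper for rpc.py against /var/tmp/bperf.sock; which socket rpc_cmd targets is not expanded in this excerpt and is assumed here to be the nvmf target application):

    # Track NVMe errors per status code and never give up on retries.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Clear any previous injection before re-arming it (assumed target side, via rpc_cmd).
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    # Attach the controller with the TCP data digest (DDGST) enabled.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt 32 crc32c operations in the accel layer (issued via rpc_cmd,
    # assumed to reach the nvmf target rather than the bperf socket).
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    # Kick off the 2-second randwrite workload in the already-running bdevperf instance.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The "Data digest error" lines that follow are the expected outcome of this injection, each paired with a COMMAND TRANSIENT TRANSPORT ERROR completion.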
00:26:58.548 [2024-07-10 14:45:10.766623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.548 [2024-07-10 14:45:10.767051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.548 [2024-07-10 14:45:10.767085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.548 [2024-07-10 14:45:10.774136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.548 [2024-07-10 14:45:10.774573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.548 [2024-07-10 14:45:10.774617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.548 [2024-07-10 14:45:10.781405] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.548 [2024-07-10 14:45:10.781785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.548 [2024-07-10 14:45:10.781828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.548 [2024-07-10 14:45:10.786977] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.548 [2024-07-10 14:45:10.787395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.548 [2024-07-10 14:45:10.787439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.548 [2024-07-10 14:45:10.792821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.548 [2024-07-10 14:45:10.793222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.548 [2024-07-10 14:45:10.793264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.548 [2024-07-10 14:45:10.798554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.548 [2024-07-10 14:45:10.798937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.548 [2024-07-10 14:45:10.798980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.548 [2024-07-10 14:45:10.804486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.548 [2024-07-10 14:45:10.804843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.548 [2024-07-10 14:45:10.804901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.548 [2024-07-10 14:45:10.811777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.548 [2024-07-10 14:45:10.812355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.548 [2024-07-10 14:45:10.812424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.548 [2024-07-10 14:45:10.819347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.548 [2024-07-10 14:45:10.819807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.548 [2024-07-10 14:45:10.819856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.548 [2024-07-10 14:45:10.825509] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.548 [2024-07-10 14:45:10.825854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.548 [2024-07-10 14:45:10.825896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.548 [2024-07-10 14:45:10.831830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.548 [2024-07-10 14:45:10.832246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.548 [2024-07-10 14:45:10.832301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.808 [2024-07-10 14:45:10.839682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.808 [2024-07-10 14:45:10.840091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.808 [2024-07-10 14:45:10.840134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.808 [2024-07-10 14:45:10.846968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.808 [2024-07-10 14:45:10.847375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.808 [2024-07-10 14:45:10.847417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.808 [2024-07-10 14:45:10.854092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.808 [2024-07-10 14:45:10.854543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.808 [2024-07-10 14:45:10.854603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.808 [2024-07-10 14:45:10.860773] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.808 [2024-07-10 14:45:10.861201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.808 [2024-07-10 14:45:10.861248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.808 [2024-07-10 14:45:10.867324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.808 [2024-07-10 14:45:10.867710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.808 [2024-07-10 14:45:10.867759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.808 [2024-07-10 14:45:10.873225] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.808 [2024-07-10 14:45:10.873600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.808 [2024-07-10 14:45:10.873649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.808 [2024-07-10 14:45:10.878708] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.808 [2024-07-10 14:45:10.879032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-07-10 14:45:10.879072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.809 [2024-07-10 14:45:10.884150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.809 [2024-07-10 14:45:10.884490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-07-10 14:45:10.884530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.809 [2024-07-10 14:45:10.889766] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.809 [2024-07-10 14:45:10.890092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-07-10 14:45:10.890132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.809 [2024-07-10 14:45:10.895236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.809 [2024-07-10 14:45:10.895604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-07-10 14:45:10.895650] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.809 [2024-07-10 14:45:10.900911] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.809 [2024-07-10 14:45:10.901270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-07-10 14:45:10.901329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.809 [2024-07-10 14:45:10.906456] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.809 [2024-07-10 14:45:10.906814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-07-10 14:45:10.906861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.809 [2024-07-10 14:45:10.912084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.809 [2024-07-10 14:45:10.912452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-07-10 14:45:10.912496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.809 [2024-07-10 14:45:10.918159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.809 [2024-07-10 14:45:10.918534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-07-10 14:45:10.918579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.809 [2024-07-10 14:45:10.923777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.809 [2024-07-10 14:45:10.924122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-07-10 14:45:10.924168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.809 [2024-07-10 14:45:10.929490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.809 [2024-07-10 14:45:10.929833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-07-10 14:45:10.929877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.809 [2024-07-10 14:45:10.935069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.809 [2024-07-10 14:45:10.935492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 
[2024-07-10 14:45:10.935541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.809 [2024-07-10 14:45:10.940806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.809 [2024-07-10 14:45:10.941185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-07-10 14:45:10.941232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.809 [2024-07-10 14:45:10.946531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.809 [2024-07-10 14:45:10.946898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-07-10 14:45:10.946944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.809 [2024-07-10 14:45:10.952442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.809 [2024-07-10 14:45:10.952814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-07-10 14:45:10.952877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.809 [2024-07-10 14:45:10.958133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.809 [2024-07-10 14:45:10.958488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-07-10 14:45:10.958532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.809 [2024-07-10 14:45:10.963861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.809 [2024-07-10 14:45:10.964203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-07-10 14:45:10.964248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.809 [2024-07-10 14:45:10.969258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.809 [2024-07-10 14:45:10.969597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-07-10 14:45:10.969641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.809 [2024-07-10 14:45:10.974521] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.809 [2024-07-10 14:45:10.974842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-07-10 14:45:10.974882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.809 [2024-07-10 14:45:10.979760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.809 [2024-07-10 14:45:10.980091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-07-10 14:45:10.980136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.809 [2024-07-10 14:45:10.985038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.809 [2024-07-10 14:45:10.985370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-07-10 14:45:10.985410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.809 [2024-07-10 14:45:10.990328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.809 [2024-07-10 14:45:10.990657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-07-10 14:45:10.990700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.809 [2024-07-10 14:45:10.995589] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.809 [2024-07-10 14:45:10.995911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-07-10 14:45:10.995957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.809 [2024-07-10 14:45:11.000835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.809 [2024-07-10 14:45:11.001175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-07-10 14:45:11.001220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.809 [2024-07-10 14:45:11.006141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.809 [2024-07-10 14:45:11.006504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-07-10 14:45:11.006549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.809 [2024-07-10 14:45:11.011462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.809 [2024-07-10 14:45:11.011817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-07-10 14:45:11.011863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.809 [2024-07-10 14:45:11.016822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.809 [2024-07-10 14:45:11.017219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-07-10 14:45:11.017259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.809 [2024-07-10 14:45:11.022501] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.809 [2024-07-10 14:45:11.022873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-07-10 14:45:11.022923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.809 [2024-07-10 14:45:11.028813] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.809 [2024-07-10 14:45:11.029214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-07-10 14:45:11.029263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.809 [2024-07-10 14:45:11.034415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.809 [2024-07-10 14:45:11.034753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-07-10 14:45:11.034800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.809 [2024-07-10 14:45:11.039925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.809 [2024-07-10 14:45:11.040310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.810 [2024-07-10 14:45:11.040355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.810 [2024-07-10 14:45:11.045565] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.810 [2024-07-10 14:45:11.045924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.810 [2024-07-10 14:45:11.045974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.810 [2024-07-10 14:45:11.051582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.810 [2024-07-10 14:45:11.051955] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.810 [2024-07-10 14:45:11.052003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.810 [2024-07-10 14:45:11.057741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.810 [2024-07-10 14:45:11.058104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.810 [2024-07-10 14:45:11.058151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.810 [2024-07-10 14:45:11.063756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.810 [2024-07-10 14:45:11.064115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.810 [2024-07-10 14:45:11.064168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.810 [2024-07-10 14:45:11.069659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.810 [2024-07-10 14:45:11.070069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.810 [2024-07-10 14:45:11.070112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.810 [2024-07-10 14:45:11.076176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.810 [2024-07-10 14:45:11.076568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.810 [2024-07-10 14:45:11.076619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.810 [2024-07-10 14:45:11.082089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.810 [2024-07-10 14:45:11.082434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.810 [2024-07-10 14:45:11.082476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.810 [2024-07-10 14:45:11.087948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.810 [2024-07-10 14:45:11.088406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.810 [2024-07-10 14:45:11.088461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.810 [2024-07-10 14:45:11.093098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:58.810 
[2024-07-10 14:45:11.093421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.810 [2024-07-10 14:45:11.093463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.070 [2024-07-10 14:45:11.097954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.071 [2024-07-10 14:45:11.098215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-07-10 14:45:11.098255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:59.071 [2024-07-10 14:45:11.102771] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.071 [2024-07-10 14:45:11.103031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-07-10 14:45:11.103072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:59.071 [2024-07-10 14:45:11.108192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.071 [2024-07-10 14:45:11.108503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-07-10 14:45:11.108559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:59.071 [2024-07-10 14:45:11.113881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.071 [2024-07-10 14:45:11.114154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-07-10 14:45:11.114195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.071 [2024-07-10 14:45:11.119184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.071 [2024-07-10 14:45:11.119442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-07-10 14:45:11.119481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:59.071 [2024-07-10 14:45:11.123948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.071 [2024-07-10 14:45:11.124189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-07-10 14:45:11.124230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:59.071 [2024-07-10 14:45:11.128613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.071 [2024-07-10 14:45:11.128877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-07-10 14:45:11.128919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:59.071 [2024-07-10 14:45:11.133327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.071 [2024-07-10 14:45:11.133562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-07-10 14:45:11.133601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.071 [2024-07-10 14:45:11.138015] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.071 [2024-07-10 14:45:11.138243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-07-10 14:45:11.138294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:59.071 [2024-07-10 14:45:11.142686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.071 [2024-07-10 14:45:11.142919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-07-10 14:45:11.142957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:59.071 [2024-07-10 14:45:11.147380] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.071 [2024-07-10 14:45:11.147623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-07-10 14:45:11.147660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:59.071 [2024-07-10 14:45:11.152171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.071 [2024-07-10 14:45:11.152427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-07-10 14:45:11.152465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.071 [2024-07-10 14:45:11.156897] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.071 [2024-07-10 14:45:11.157130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-07-10 14:45:11.157168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:59.071 [2024-07-10 14:45:11.161963] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.071 [2024-07-10 14:45:11.162215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-07-10 14:45:11.162260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:59.071 [2024-07-10 14:45:11.166875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.071 [2024-07-10 14:45:11.167110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-07-10 14:45:11.167154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:59.071 [2024-07-10 14:45:11.171665] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.071 [2024-07-10 14:45:11.171902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-07-10 14:45:11.171943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.071 [2024-07-10 14:45:11.176487] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.071 [2024-07-10 14:45:11.176719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-07-10 14:45:11.176761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:59.071 [2024-07-10 14:45:11.181425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.071 [2024-07-10 14:45:11.181689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-07-10 14:45:11.181732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:59.071 [2024-07-10 14:45:11.186276] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.071 [2024-07-10 14:45:11.186529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-07-10 14:45:11.186569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:59.071 [2024-07-10 14:45:11.191517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.071 [2024-07-10 14:45:11.191755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-07-10 14:45:11.191797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:59.071 [2024-07-10 14:45:11.196235] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.071 [2024-07-10 14:45:11.196480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-07-10 14:45:11.196521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:59.071 [2024-07-10 14:45:11.200957] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.071 [2024-07-10 14:45:11.201200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-07-10 14:45:11.201244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:59.071 [2024-07-10 14:45:11.205715] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.071 [2024-07-10 14:45:11.205953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-07-10 14:45:11.205996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:59.071 [2024-07-10 14:45:11.210312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.071 [2024-07-10 14:45:11.210654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-07-10 14:45:11.210706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.071 [2024-07-10 14:45:11.215028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.071 [2024-07-10 14:45:11.215236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-07-10 14:45:11.215311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:59.071 [2024-07-10 14:45:11.219694] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.071 [2024-07-10 14:45:11.219884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-07-10 14:45:11.219914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:59.071 [2024-07-10 14:45:11.225274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.071 [2024-07-10 14:45:11.225511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-07-10 14:45:11.225553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:59.071 [2024-07-10 14:45:11.229939] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.071 [2024-07-10 14:45:11.230164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-07-10 14:45:11.230209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.072 [2024-07-10 14:45:11.234708] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.072 [2024-07-10 14:45:11.234913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-07-10 14:45:11.234951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:59.072 [2024-07-10 14:45:11.239340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.072 [2024-07-10 14:45:11.239617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-07-10 14:45:11.239669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:59.072 [2024-07-10 14:45:11.244050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.072 [2024-07-10 14:45:11.244398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-07-10 14:45:11.244462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:59.072 [2024-07-10 14:45:11.248690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.072 [2024-07-10 14:45:11.248884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-07-10 14:45:11.248923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.072 [2024-07-10 14:45:11.253378] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.072 [2024-07-10 14:45:11.253614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-07-10 14:45:11.253646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:59.072 [2024-07-10 14:45:11.258039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.072 [2024-07-10 14:45:11.258201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-07-10 14:45:11.258227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:59.072 [2024-07-10 14:45:11.262766] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.072 [2024-07-10 14:45:11.262949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-07-10 14:45:11.262975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:59.072 [2024-07-10 14:45:11.267435] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.072 [2024-07-10 14:45:11.267602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-07-10 14:45:11.267627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.072 [2024-07-10 14:45:11.272059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.072 [2024-07-10 14:45:11.272205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-07-10 14:45:11.272231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:59.072 [2024-07-10 14:45:11.276764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.072 [2024-07-10 14:45:11.276937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-07-10 14:45:11.276964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:59.072 [2024-07-10 14:45:11.281397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.072 [2024-07-10 14:45:11.281547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-07-10 14:45:11.281574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:59.072 [2024-07-10 14:45:11.286146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.072 [2024-07-10 14:45:11.286322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-07-10 14:45:11.286349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.072 [2024-07-10 14:45:11.290833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.072 [2024-07-10 14:45:11.290996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-07-10 14:45:11.291022] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:59.072 [2024-07-10 14:45:11.296323] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.072 [2024-07-10 14:45:11.296554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-07-10 14:45:11.296584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:59.072 [2024-07-10 14:45:11.301905] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.072 [2024-07-10 14:45:11.302146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-07-10 14:45:11.302190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:59.072 [2024-07-10 14:45:11.307049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.072 [2024-07-10 14:45:11.307263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-07-10 14:45:11.307327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.072 [2024-07-10 14:45:11.312712] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.072 [2024-07-10 14:45:11.312960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-07-10 14:45:11.313002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:59.072 [2024-07-10 14:45:11.318492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.072 [2024-07-10 14:45:11.318729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-07-10 14:45:11.318771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:59.072 [2024-07-10 14:45:11.325401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.072 [2024-07-10 14:45:11.325654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-07-10 14:45:11.325701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:59.072 [2024-07-10 14:45:11.331707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.072 [2024-07-10 14:45:11.331892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 
[2024-07-10 14:45:11.331947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.072 [2024-07-10 14:45:11.336517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.072 [2024-07-10 14:45:11.336696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-07-10 14:45:11.336738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:59.072 [2024-07-10 14:45:11.341171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.072 [2024-07-10 14:45:11.341358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-07-10 14:45:11.341397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:59.072 [2024-07-10 14:45:11.345864] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.072 [2024-07-10 14:45:11.346040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-07-10 14:45:11.346093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:59.072 [2024-07-10 14:45:11.350620] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.072 [2024-07-10 14:45:11.350834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-07-10 14:45:11.350886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.072 [2024-07-10 14:45:11.355368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.072 [2024-07-10 14:45:11.355574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-07-10 14:45:11.355628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:59.333 [2024-07-10 14:45:11.360534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.333 [2024-07-10 14:45:11.360712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.333 [2024-07-10 14:45:11.360764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:59.333 [2024-07-10 14:45:11.365481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.333 [2024-07-10 14:45:11.365722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.333 [2024-07-10 14:45:11.365775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:59.333 [2024-07-10 14:45:11.370309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.333 [2024-07-10 14:45:11.370489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.333 [2024-07-10 14:45:11.370543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.333 [2024-07-10 14:45:11.375223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.333 [2024-07-10 14:45:11.375463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.333 [2024-07-10 14:45:11.375515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:59.333 [2024-07-10 14:45:11.380267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.333 [2024-07-10 14:45:11.380521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.333 [2024-07-10 14:45:11.380572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:59.333 [2024-07-10 14:45:11.385670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.333 [2024-07-10 14:45:11.385868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.333 [2024-07-10 14:45:11.385904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:59.333 [2024-07-10 14:45:11.391262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.333 [2024-07-10 14:45:11.391462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.333 [2024-07-10 14:45:11.391496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.333 [2024-07-10 14:45:11.397015] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.333 [2024-07-10 14:45:11.397241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.333 [2024-07-10 14:45:11.397277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:59.333 [2024-07-10 14:45:11.402390] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.333 [2024-07-10 14:45:11.402598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.333 [2024-07-10 14:45:11.402633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:59.333 [2024-07-10 14:45:11.408014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.333 [2024-07-10 14:45:11.408200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.333 [2024-07-10 14:45:11.408243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:59.333 [2024-07-10 14:45:11.413649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.333 [2024-07-10 14:45:11.413851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.333 [2024-07-10 14:45:11.413883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.333 [2024-07-10 14:45:11.419195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.333 [2024-07-10 14:45:11.419420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.333 [2024-07-10 14:45:11.419455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:59.333 [2024-07-10 14:45:11.424750] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.333 [2024-07-10 14:45:11.424973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.333 [2024-07-10 14:45:11.425007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:59.333 [2024-07-10 14:45:11.430230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.333 [2024-07-10 14:45:11.430441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.333 [2024-07-10 14:45:11.430473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:59.333 [2024-07-10 14:45:11.435907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.333 [2024-07-10 14:45:11.436158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.333 [2024-07-10 14:45:11.436207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.333 [2024-07-10 14:45:11.441467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:26:59.333 [2024-07-10 14:45:11.441717] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.333 [2024-07-10 14:45:11.441758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:59.333 [2024-07-10 14:45:11.447687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90
00:26:59.333 [2024-07-10 14:45:11.447891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.333 [2024-07-10 14:45:11.447921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[2024-07-10 14:45:11.452 - 14:45:11.972, elapsed 00:26:59.333-00:26:59.854] The same three-record sequence repeats for every subsequent len:32 WRITE on this connection (lba differs per command; qid:1 cid:15 throughout; sqhd cycles 0001/0021/0041/0061): a tcp.c:2067:data_crc32_calc_done data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90, the nvme_qpair.c:243 WRITE command print, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion from nvme_qpair.c:474.
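The tcp.c:2067 records above are the data digest check failing: NVMe/TCP PDUs may carry a CRC-32C (Castagnoli) digest over the data payload, the receiver recomputes it, and a mismatch is reported as a data digest error, which lines up with the COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions interleaved with them. The fragment below is a minimal, self-contained sketch of that CRC-32C computation only; it is not SPDK's implementation (which uses table-driven or hardware/accel-offloaded CRC), and the payload string and check value are illustrative.

/*
 * Sketch: bitwise CRC-32C (Castagnoli), the digest algorithm used for
 * NVMe/TCP header and data digests. A receiver recomputes this over the
 * received payload and compares it with the digest carried in the PDU.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;                 /* initial value */
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++) {
            /* reflected CRC-32C polynomial 0x82F63B78 */
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1));
        }
    }
    return crc ^ 0xFFFFFFFFu;                   /* final XOR */
}

int main(void)
{
    const char *payload = "123456789";          /* hypothetical PDU payload */
    uint32_t digest = crc32c((const uint8_t *)payload, strlen(payload));

    /* The standard CRC-32C check value for "123456789" is 0xE3069283. */
    printf("computed data digest: 0x%08X\n", digest);
    return digest == 0xE3069283u ? 0 : 1;
}

In this log the test deliberately corrupts the digest, so every recomputation disagrees with the value carried in the PDU and each affected WRITE is completed with the transient transport error seen above.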
[2024-07-10 14:45:11.977 - 14:45:12.160, elapsed 00:26:59.854-00:27:00.112] The digest-error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR (00/22) pattern continues unchanged, still on tqpair=(0x13ec050) with pdu=0x2000190fef90, qid:1 cid:15.
00:27:00.112 [2024-07-10 14:45:12.164717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90
00:27:00.112 [2024-07-10 14:45:12.164883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.112 [2024-07-10 14:45:12.164907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:00.112 [2024-07-10 14:45:12.169472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90
00:27:00.112 [2024-07-10 14:45:12.169634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.112 [2024-07-10 14:45:12.169659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.112 [2024-07-10 14:45:12.174184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.112 [2024-07-10 14:45:12.174386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.112 [2024-07-10 14:45:12.174410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.112 [2024-07-10 14:45:12.178877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.112 [2024-07-10 14:45:12.179024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.112 [2024-07-10 14:45:12.179047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.112 [2024-07-10 14:45:12.183590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.112 [2024-07-10 14:45:12.183738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.112 [2024-07-10 14:45:12.183762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.112 [2024-07-10 14:45:12.188243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.112 [2024-07-10 14:45:12.188406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.112 [2024-07-10 14:45:12.188431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.112 [2024-07-10 14:45:12.192960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.112 [2024-07-10 14:45:12.193105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.112 [2024-07-10 14:45:12.193128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.113 [2024-07-10 14:45:12.197597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.113 [2024-07-10 14:45:12.197742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.113 [2024-07-10 14:45:12.197765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.113 [2024-07-10 14:45:12.202229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.113 [2024-07-10 14:45:12.202400] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.113 [2024-07-10 14:45:12.202424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.113 [2024-07-10 14:45:12.206884] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.113 [2024-07-10 14:45:12.207037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.113 [2024-07-10 14:45:12.207059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.113 [2024-07-10 14:45:12.211603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.113 [2024-07-10 14:45:12.211776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.113 [2024-07-10 14:45:12.211799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.113 [2024-07-10 14:45:12.216258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.113 [2024-07-10 14:45:12.216442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.113 [2024-07-10 14:45:12.216468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.113 [2024-07-10 14:45:12.220922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.113 [2024-07-10 14:45:12.221090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.113 [2024-07-10 14:45:12.221115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.113 [2024-07-10 14:45:12.225602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.113 [2024-07-10 14:45:12.225747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.113 [2024-07-10 14:45:12.225773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.113 [2024-07-10 14:45:12.230266] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.113 [2024-07-10 14:45:12.230458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.113 [2024-07-10 14:45:12.230485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.113 [2024-07-10 14:45:12.235019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.113 
[2024-07-10 14:45:12.235182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.113 [2024-07-10 14:45:12.235215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.113 [2024-07-10 14:45:12.239690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.113 [2024-07-10 14:45:12.239843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.113 [2024-07-10 14:45:12.239869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.113 [2024-07-10 14:45:12.244392] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.113 [2024-07-10 14:45:12.244546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.113 [2024-07-10 14:45:12.244572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.113 [2024-07-10 14:45:12.249043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.113 [2024-07-10 14:45:12.249196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.113 [2024-07-10 14:45:12.249223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.113 [2024-07-10 14:45:12.253725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.113 [2024-07-10 14:45:12.253900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.113 [2024-07-10 14:45:12.253926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.113 [2024-07-10 14:45:12.258399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.113 [2024-07-10 14:45:12.258571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.113 [2024-07-10 14:45:12.258596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.113 [2024-07-10 14:45:12.263054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.113 [2024-07-10 14:45:12.263225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.113 [2024-07-10 14:45:12.263249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.113 [2024-07-10 14:45:12.267732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) 
with pdu=0x2000190fef90 00:27:00.113 [2024-07-10 14:45:12.267901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.113 [2024-07-10 14:45:12.267925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.113 [2024-07-10 14:45:12.272415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.113 [2024-07-10 14:45:12.272573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.113 [2024-07-10 14:45:12.272597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.113 [2024-07-10 14:45:12.277025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.113 [2024-07-10 14:45:12.277183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.113 [2024-07-10 14:45:12.277207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.113 [2024-07-10 14:45:12.281706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.113 [2024-07-10 14:45:12.281871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.113 [2024-07-10 14:45:12.281896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.113 [2024-07-10 14:45:12.286460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.113 [2024-07-10 14:45:12.286630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.113 [2024-07-10 14:45:12.286654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.113 [2024-07-10 14:45:12.291062] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.113 [2024-07-10 14:45:12.291232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.113 [2024-07-10 14:45:12.291256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.113 [2024-07-10 14:45:12.295733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.113 [2024-07-10 14:45:12.295907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.113 [2024-07-10 14:45:12.295931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.113 [2024-07-10 14:45:12.300409] tcp.c:2067:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.113 [2024-07-10 14:45:12.300552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.113 [2024-07-10 14:45:12.300576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.113 [2024-07-10 14:45:12.305125] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.113 [2024-07-10 14:45:12.305272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.113 [2024-07-10 14:45:12.305309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.113 [2024-07-10 14:45:12.309780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.113 [2024-07-10 14:45:12.309955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.113 [2024-07-10 14:45:12.309978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.113 [2024-07-10 14:45:12.314470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.113 [2024-07-10 14:45:12.314625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.113 [2024-07-10 14:45:12.314648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.113 [2024-07-10 14:45:12.319191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.113 [2024-07-10 14:45:12.319351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.113 [2024-07-10 14:45:12.319375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.113 [2024-07-10 14:45:12.323830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.113 [2024-07-10 14:45:12.323995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.113 [2024-07-10 14:45:12.324020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.113 [2024-07-10 14:45:12.328509] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.113 [2024-07-10 14:45:12.328672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.113 [2024-07-10 14:45:12.328699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.114 [2024-07-10 14:45:12.333177] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.114 [2024-07-10 14:45:12.333358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-07-10 14:45:12.333383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.114 [2024-07-10 14:45:12.337881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.114 [2024-07-10 14:45:12.338051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-07-10 14:45:12.338076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.114 [2024-07-10 14:45:12.342596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.114 [2024-07-10 14:45:12.342750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-07-10 14:45:12.342775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.114 [2024-07-10 14:45:12.347195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.114 [2024-07-10 14:45:12.347357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-07-10 14:45:12.347381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.114 [2024-07-10 14:45:12.351955] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.114 [2024-07-10 14:45:12.352108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-07-10 14:45:12.352132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.114 [2024-07-10 14:45:12.356737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.114 [2024-07-10 14:45:12.356903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-07-10 14:45:12.356927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.114 [2024-07-10 14:45:12.361583] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.114 [2024-07-10 14:45:12.361740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-07-10 14:45:12.361764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:27:00.114 [2024-07-10 14:45:12.366338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.114 [2024-07-10 14:45:12.366495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-07-10 14:45:12.366519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.114 [2024-07-10 14:45:12.370933] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.114 [2024-07-10 14:45:12.371087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-07-10 14:45:12.371119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.114 [2024-07-10 14:45:12.375605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.114 [2024-07-10 14:45:12.375763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-07-10 14:45:12.375788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.114 [2024-07-10 14:45:12.380300] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.114 [2024-07-10 14:45:12.380467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-07-10 14:45:12.380491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.114 [2024-07-10 14:45:12.385000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.114 [2024-07-10 14:45:12.385154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-07-10 14:45:12.385176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.114 [2024-07-10 14:45:12.389684] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.114 [2024-07-10 14:45:12.389858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-07-10 14:45:12.389899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.114 [2024-07-10 14:45:12.394443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.114 [2024-07-10 14:45:12.394590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-07-10 14:45:12.394613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.114 [2024-07-10 14:45:12.399093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.114 [2024-07-10 14:45:12.399251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-07-10 14:45:12.399275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.373 [2024-07-10 14:45:12.403759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.373 [2024-07-10 14:45:12.403901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.373 [2024-07-10 14:45:12.403924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.373 [2024-07-10 14:45:12.408418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.373 [2024-07-10 14:45:12.408563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.373 [2024-07-10 14:45:12.408585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.373 [2024-07-10 14:45:12.413091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.373 [2024-07-10 14:45:12.413236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.373 [2024-07-10 14:45:12.413266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.373 [2024-07-10 14:45:12.417709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.373 [2024-07-10 14:45:12.417863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.373 [2024-07-10 14:45:12.417887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.373 [2024-07-10 14:45:12.422394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.373 [2024-07-10 14:45:12.422542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.373 [2024-07-10 14:45:12.422566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.373 [2024-07-10 14:45:12.427066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.373 [2024-07-10 14:45:12.427208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.373 [2024-07-10 14:45:12.427232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.373 [2024-07-10 14:45:12.431727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.373 [2024-07-10 14:45:12.431893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.373 [2024-07-10 14:45:12.431919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.373 [2024-07-10 14:45:12.436427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.373 [2024-07-10 14:45:12.436572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.373 [2024-07-10 14:45:12.436596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.373 [2024-07-10 14:45:12.441032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.373 [2024-07-10 14:45:12.441178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.373 [2024-07-10 14:45:12.441209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.373 [2024-07-10 14:45:12.445722] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.373 [2024-07-10 14:45:12.445868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.373 [2024-07-10 14:45:12.445892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.373 [2024-07-10 14:45:12.450402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.373 [2024-07-10 14:45:12.450557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.373 [2024-07-10 14:45:12.450580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.373 [2024-07-10 14:45:12.455093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.373 [2024-07-10 14:45:12.455239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.373 [2024-07-10 14:45:12.455262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.373 [2024-07-10 14:45:12.459711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.373 [2024-07-10 14:45:12.459882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.373 [2024-07-10 14:45:12.459907] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.373 [2024-07-10 14:45:12.464470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.373 [2024-07-10 14:45:12.464629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.373 [2024-07-10 14:45:12.464654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.373 [2024-07-10 14:45:12.469128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.373 [2024-07-10 14:45:12.469310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.373 [2024-07-10 14:45:12.469335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.373 [2024-07-10 14:45:12.473766] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.373 [2024-07-10 14:45:12.473918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.373 [2024-07-10 14:45:12.473944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.373 [2024-07-10 14:45:12.478486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.373 [2024-07-10 14:45:12.478640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.373 [2024-07-10 14:45:12.478665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.373 [2024-07-10 14:45:12.483171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.373 [2024-07-10 14:45:12.483352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.373 [2024-07-10 14:45:12.483381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.373 [2024-07-10 14:45:12.487844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.373 [2024-07-10 14:45:12.487999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.373 [2024-07-10 14:45:12.488026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.373 [2024-07-10 14:45:12.492619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.373 [2024-07-10 14:45:12.492788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.373 
[2024-07-10 14:45:12.492815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.373 [2024-07-10 14:45:12.497386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.373 [2024-07-10 14:45:12.497568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.373 [2024-07-10 14:45:12.497594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.373 [2024-07-10 14:45:12.502071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.373 [2024-07-10 14:45:12.502248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.373 [2024-07-10 14:45:12.502272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.373 [2024-07-10 14:45:12.506765] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.373 [2024-07-10 14:45:12.506915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.373 [2024-07-10 14:45:12.506941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.373 [2024-07-10 14:45:12.511447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.373 [2024-07-10 14:45:12.511593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.373 [2024-07-10 14:45:12.511618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.373 [2024-07-10 14:45:12.516125] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.373 [2024-07-10 14:45:12.516317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.373 [2024-07-10 14:45:12.516342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.373 [2024-07-10 14:45:12.520813] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.373 [2024-07-10 14:45:12.520983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.373 [2024-07-10 14:45:12.521009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.373 [2024-07-10 14:45:12.525526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.373 [2024-07-10 14:45:12.525673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.373 [2024-07-10 14:45:12.525697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.373 [2024-07-10 14:45:12.530075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.373 [2024-07-10 14:45:12.530226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.373 [2024-07-10 14:45:12.530251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.374 [2024-07-10 14:45:12.534819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.374 [2024-07-10 14:45:12.534998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.374 [2024-07-10 14:45:12.535022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.374 [2024-07-10 14:45:12.539469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.374 [2024-07-10 14:45:12.539641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.374 [2024-07-10 14:45:12.539665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.374 [2024-07-10 14:45:12.544195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.374 [2024-07-10 14:45:12.544366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.374 [2024-07-10 14:45:12.544392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.374 [2024-07-10 14:45:12.548882] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.374 [2024-07-10 14:45:12.549037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.374 [2024-07-10 14:45:12.549061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.374 [2024-07-10 14:45:12.553614] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.374 [2024-07-10 14:45:12.553782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.374 [2024-07-10 14:45:12.553806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.374 [2024-07-10 14:45:12.558420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.374 [2024-07-10 14:45:12.558591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.374 [2024-07-10 14:45:12.558615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.374 [2024-07-10 14:45:12.563264] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.374 [2024-07-10 14:45:12.563423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.374 [2024-07-10 14:45:12.563447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.374 [2024-07-10 14:45:12.567952] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.374 [2024-07-10 14:45:12.568096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.374 [2024-07-10 14:45:12.568121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.374 [2024-07-10 14:45:12.572676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.374 [2024-07-10 14:45:12.572824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.374 [2024-07-10 14:45:12.572847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.374 [2024-07-10 14:45:12.577367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.374 [2024-07-10 14:45:12.577520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.374 [2024-07-10 14:45:12.577545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.374 [2024-07-10 14:45:12.582008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.374 [2024-07-10 14:45:12.582152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.374 [2024-07-10 14:45:12.582176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.374 [2024-07-10 14:45:12.586703] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.374 [2024-07-10 14:45:12.586859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.374 [2024-07-10 14:45:12.586883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.374 [2024-07-10 14:45:12.591371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.374 [2024-07-10 14:45:12.591538] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.374 [2024-07-10 14:45:12.591561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.374 [2024-07-10 14:45:12.596112] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.374 [2024-07-10 14:45:12.596259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.374 [2024-07-10 14:45:12.596295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.374 [2024-07-10 14:45:12.600807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.374 [2024-07-10 14:45:12.600975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.374 [2024-07-10 14:45:12.600998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.374 [2024-07-10 14:45:12.605539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.374 [2024-07-10 14:45:12.605695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.374 [2024-07-10 14:45:12.605719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.374 [2024-07-10 14:45:12.610226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.374 [2024-07-10 14:45:12.610396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.374 [2024-07-10 14:45:12.610423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.374 [2024-07-10 14:45:12.614919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.374 [2024-07-10 14:45:12.615088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.374 [2024-07-10 14:45:12.615113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.374 [2024-07-10 14:45:12.619554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.374 [2024-07-10 14:45:12.619730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.374 [2024-07-10 14:45:12.619755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.374 [2024-07-10 14:45:12.624187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.374 
[2024-07-10 14:45:12.624368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.374 [2024-07-10 14:45:12.624393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.374 [2024-07-10 14:45:12.628903] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.374 [2024-07-10 14:45:12.629050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.374 [2024-07-10 14:45:12.629074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.374 [2024-07-10 14:45:12.633560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.374 [2024-07-10 14:45:12.633715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.374 [2024-07-10 14:45:12.633739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.374 [2024-07-10 14:45:12.638235] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.374 [2024-07-10 14:45:12.638418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.374 [2024-07-10 14:45:12.638443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.374 [2024-07-10 14:45:12.642935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.374 [2024-07-10 14:45:12.643086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.374 [2024-07-10 14:45:12.643112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.374 [2024-07-10 14:45:12.647649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.374 [2024-07-10 14:45:12.647795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.374 [2024-07-10 14:45:12.647820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.374 [2024-07-10 14:45:12.652324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.374 [2024-07-10 14:45:12.652495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.374 [2024-07-10 14:45:12.652521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.374 [2024-07-10 14:45:12.656990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.374 [2024-07-10 14:45:12.657137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.374 [2024-07-10 14:45:12.657160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.374 [2024-07-10 14:45:12.661632] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.374 [2024-07-10 14:45:12.661777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.374 [2024-07-10 14:45:12.661801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.632 [2024-07-10 14:45:12.666350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.632 [2024-07-10 14:45:12.666523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.632 [2024-07-10 14:45:12.666547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.632 [2024-07-10 14:45:12.671012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.632 [2024-07-10 14:45:12.671181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.632 [2024-07-10 14:45:12.671205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.632 [2024-07-10 14:45:12.675690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.632 [2024-07-10 14:45:12.675870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.632 [2024-07-10 14:45:12.675895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.632 [2024-07-10 14:45:12.680406] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.632 [2024-07-10 14:45:12.680551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.632 [2024-07-10 14:45:12.680575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.632 [2024-07-10 14:45:12.685056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.632 [2024-07-10 14:45:12.685202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.632 [2024-07-10 14:45:12.685227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.632 [2024-07-10 14:45:12.689763] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.632 [2024-07-10 14:45:12.689909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.632 [2024-07-10 14:45:12.689933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.632 [2024-07-10 14:45:12.694357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.632 [2024-07-10 14:45:12.694529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.632 [2024-07-10 14:45:12.694552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.632 [2024-07-10 14:45:12.699060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.632 [2024-07-10 14:45:12.699208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.632 [2024-07-10 14:45:12.699238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.632 [2024-07-10 14:45:12.703784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.632 [2024-07-10 14:45:12.703945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.632 [2024-07-10 14:45:12.703972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.632 [2024-07-10 14:45:12.708513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.632 [2024-07-10 14:45:12.708684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.632 [2024-07-10 14:45:12.708710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.632 [2024-07-10 14:45:12.713157] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.632 [2024-07-10 14:45:12.713336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.632 [2024-07-10 14:45:12.713362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.632 [2024-07-10 14:45:12.717891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.632 [2024-07-10 14:45:12.718035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.632 [2024-07-10 14:45:12.718059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:27:00.632 [2024-07-10 14:45:12.722566] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.633 [2024-07-10 14:45:12.722711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.633 [2024-07-10 14:45:12.722734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.633 [2024-07-10 14:45:12.727301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.633 [2024-07-10 14:45:12.727448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.633 [2024-07-10 14:45:12.727472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.633 [2024-07-10 14:45:12.731967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.633 [2024-07-10 14:45:12.732114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.633 [2024-07-10 14:45:12.732138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.633 [2024-07-10 14:45:12.736658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.633 [2024-07-10 14:45:12.736805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.633 [2024-07-10 14:45:12.736830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.633 [2024-07-10 14:45:12.741298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.633 [2024-07-10 14:45:12.741452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.633 [2024-07-10 14:45:12.741476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.633 [2024-07-10 14:45:12.745990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.633 [2024-07-10 14:45:12.746170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.633 [2024-07-10 14:45:12.746195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.633 [2024-07-10 14:45:12.750713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ec050) with pdu=0x2000190fef90 00:27:00.633 [2024-07-10 14:45:12.750856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.633 [2024-07-10 14:45:12.750881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.633 00:27:00.633 Latency(us) 00:27:00.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:00.633 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:00.633 nvme0n1 : 2.00 6109.85 763.73 0.00 0.00 2612.41 1995.87 13464.67 00:27:00.633 =================================================================================================================== 00:27:00.633 Total : 6109.85 763.73 0.00 0.00 2612.41 1995.87 13464.67 00:27:00.633 0 00:27:00.633 14:45:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:00.633 14:45:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:00.633 14:45:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:00.633 14:45:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:00.633 | .driver_specific 00:27:00.633 | .nvme_error 00:27:00.633 | .status_code 00:27:00.633 | .command_transient_transport_error' 00:27:00.890 14:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 394 > 0 )) 00:27:00.890 14:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 113048 00:27:00.890 14:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 113048 ']' 00:27:00.890 14:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 113048 00:27:00.890 14:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:27:00.890 14:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:00.890 14:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113048 00:27:00.890 14:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:00.890 killing process with pid 113048 00:27:00.890 14:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:00.890 14:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113048' 00:27:00.890 14:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 113048 00:27:00.890 Received shutdown signal, test time was about 2.000000 seconds 00:27:00.890 00:27:00.890 Latency(us) 00:27:00.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:00.890 =================================================================================================================== 00:27:00.890 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:00.890 14:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 113048 00:27:01.146 14:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 112784 00:27:01.146 14:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 112784 ']' 00:27:01.146 14:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 112784 00:27:01.146 14:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:27:01.146 14:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:01.146 14:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112784 00:27:01.146 14:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:01.146 14:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:01.146 killing process with pid 112784 00:27:01.146 14:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112784' 00:27:01.146 14:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 112784 00:27:01.146 14:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 112784 00:27:01.146 00:27:01.146 real 0m15.776s 00:27:01.146 user 0m30.571s 00:27:01.146 sys 0m4.369s 00:27:01.146 ************************************ 00:27:01.146 END TEST nvmf_digest_error 00:27:01.146 ************************************ 00:27:01.146 14:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:01.146 14:45:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:01.146 14:45:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:27:01.146 14:45:13 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:01.146 14:45:13 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:01.146 14:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:01.146 14:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:27:01.440 14:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:01.440 14:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:27:01.440 14:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:01.440 14:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:01.440 rmmod nvme_tcp 00:27:01.440 rmmod nvme_fabrics 00:27:01.440 rmmod nvme_keyring 00:27:01.440 14:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:01.440 14:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:27:01.440 14:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:27:01.440 14:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 112784 ']' 00:27:01.440 14:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 112784 00:27:01.440 14:45:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 112784 ']' 00:27:01.440 14:45:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 112784 00:27:01.440 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (112784) - No such process 00:27:01.440 Process with pid 112784 is not found 00:27:01.440 14:45:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 112784 is not found' 00:27:01.440 14:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:01.440 14:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:01.440 14:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:01.440 14:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:01.440 14:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 
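For reference, the digest-error assertion earlier in this test (host/digest.sh@71, the "(( 394 > 0 ))" check above) boils down to reading the bdev iostats over the bperf RPC socket. A minimal stand-alone sketch of that query, assuming the /var/tmp/bperf.sock socket and the nvme0n1 bdev name shown in the trace:

# Sketch only: read the transient transport error counter that nvmf_digest_error asserts on.
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# Each injected data-digest failure is completed as a TRANSIENT TRANSPORT ERROR (00/22), so the
# counter must be non-zero (394 in this run) for the test to pass.
(( errcount > 0 )) && echo "transient transport errors recorded: $errcount"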
00:27:01.440 14:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.440 14:45:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:01.440 14:45:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.440 14:45:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:01.440 00:27:01.440 real 0m33.439s 00:27:01.440 user 1m3.453s 00:27:01.440 sys 0m8.849s 00:27:01.440 14:45:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:01.440 ************************************ 00:27:01.440 END TEST nvmf_digest 00:27:01.440 ************************************ 00:27:01.440 14:45:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:01.440 14:45:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:01.440 14:45:13 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 1 -eq 1 ]] 00:27:01.440 14:45:13 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ tcp == \t\c\p ]] 00:27:01.440 14:45:13 nvmf_tcp -- nvmf/nvmf.sh@113 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:27:01.440 14:45:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:01.440 14:45:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:01.440 14:45:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:01.440 ************************************ 00:27:01.440 START TEST nvmf_mdns_discovery 00:27:01.440 ************************************ 00:27:01.440 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:27:01.440 * Looking for test storage... 
00:27:01.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:01.440 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:01.440 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:27:01.440 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:01.440 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:01.440 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:01.440 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:01.440 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:01.440 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:01.440 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:01.440 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:01.440 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:01.440 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:01.440 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:27:01.440 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:27:01.440 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:01.440 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:01.440 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:01.440 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:01.440 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:01.440 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:01.440 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:01.440 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:01.440 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.440 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:27:01.441 
14:45:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:01.441 Cannot find device "nvmf_tgt_br" 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:27:01.441 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:01.699 Cannot find device "nvmf_tgt_br2" 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br 
down 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:01.699 Cannot find device "nvmf_tgt_br" 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:01.699 Cannot find device "nvmf_tgt_br2" 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:01.699 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:01.699 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:01.699 14:45:13 
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:01.699 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:01.957 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:01.957 14:45:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:01.958 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:01.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:01.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:27:01.958 00:27:01.958 --- 10.0.0.2 ping statistics --- 00:27:01.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.958 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:27:01.958 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:01.958 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:01.958 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:27:01.958 00:27:01.958 --- 10.0.0.3 ping statistics --- 00:27:01.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.958 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:27:01.958 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:01.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:01.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:27:01.958 00:27:01.958 --- 10.0.0.1 ping statistics --- 00:27:01.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.958 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:27:01.958 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:01.958 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:27:01.958 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:01.958 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:01.958 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:01.958 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:01.958 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:01.958 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:01.958 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:01.958 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:27:01.958 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:01.958 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:01.958 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:01.958 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=113319 00:27:01.958 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:27:01.958 
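The nvmf_veth_init sequence traced above builds the two-address topology that the mDNS discovery test browses against. Condensed into a sketch (interface names, addresses and firewall rules taken from the trace; the initial cleanup and error branches are elided):

# Sketch of the veth/netns topology: target side in its own namespace, initiator in the root namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for peer in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$peer" up
    ip link set "$peer" master nvmf_br                                   # bridge the three host-side veth peers
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                 # reachability checks seen above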
14:45:14 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 113319 00:27:01.958 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 113319 ']' 00:27:01.958 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.958 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:01.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:01.958 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.958 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:01.958 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:01.958 [2024-07-10 14:45:14.099840] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:27:01.958 [2024-07-10 14:45:14.099929] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:01.958 [2024-07-10 14:45:14.219725] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:01.958 [2024-07-10 14:45:14.238974] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.216 [2024-07-10 14:45:14.274674] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:02.216 [2024-07-10 14:45:14.274732] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:02.216 [2024-07-10 14:45:14.274744] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:02.216 [2024-07-10 14:45:14.274752] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:02.216 [2024-07-10 14:45:14.274759] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:02.216 [2024-07-10 14:45:14.274786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:02.216 [2024-07-10 14:45:14.414754] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:02.216 [2024-07-10 14:45:14.426878] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:02.216 null0 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 
00:27:02.216 null1 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:02.216 null2 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:02.216 null3 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=113360 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 113360 /tmp/host.sock 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 113360 ']' 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:02.216 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:02.216 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:02.473 [2024-07-10 14:45:14.534041] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:27:02.473 [2024-07-10 14:45:14.534138] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113360 ] 00:27:02.473 [2024-07-10 14:45:14.661511] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:27:02.473 [2024-07-10 14:45:14.677989] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.473 [2024-07-10 14:45:14.715051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.731 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:02.731 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:27:02.731 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:27:02.731 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:27:02.731 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:27:02.731 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=113371 00:27:02.731 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:27:02.731 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:27:02.731 14:45:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:27:02.731 Process 979 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:27:02.731 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:27:02.731 Successfully dropped root privileges. 00:27:02.731 avahi-daemon 0.8 starting up. 00:27:02.731 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:27:02.731 Successfully called chroot(). 00:27:02.731 Successfully dropped remaining capabilities. 00:27:02.731 No service file found in /etc/avahi/services. 00:27:02.731 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:27:02.731 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:27:02.731 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:27:02.731 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:27:02.731 Network interface enumeration completed. 00:27:02.731 Registering new address record for fe80::587a:63ff:fef9:f6a7 on nvmf_tgt_if2.*. 00:27:02.731 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:27:02.731 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:27:02.731 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:27:03.665 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 296657664. 
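The avahi responder used by this test is restarted inside the target namespace with a minimal config fed over a pipe. Roughly, with the config lines taken verbatim from the trace and the /dev/fd/63 plumbing replaced by process substitution:

# Sketch only: kill any system avahi-daemon, then run a private instance bound to the two target interfaces.
avahi-daemon --kill || true
ip netns exec nvmf_tgt_ns_spdk avahi-daemon \
    -f <(echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no') &
avahipid=$!
sleep 1   # let it join the mDNS multicast groups on nvmf_tgt_if/nvmf_tgt_if2 before discovery starts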
00:27:03.924 14:45:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:03.924 14:45:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.924 14:45:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:03.924 14:45:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.924 14:45:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:03.924 14:45:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.924 14:45:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
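On the host side, everything from this point on is driven through the second nvmf_tgt instance listening on /tmp/host.sock. The discovery RPCs exercised here reduce to the following sketch (RPC names exactly as they appear in the trace; calling rpc.py directly is an assumption, since the test goes through its rpc_cmd wrapper):

# Sketch of the host-side mDNS discovery RPCs (socket and paths taken from this run).
RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock'
$RPC log_set_flag bdev_nvme
$RPC bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
$RPC bdev_nvme_get_mdns_discovery_info                               # services resolved via avahi
$RPC bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs     # attached controllers (still empty at this point)
$RPC bdev_get_bdevs            | jq -r '.[].name' | sort | xargs     # namespaces exposed as bdevs (still empty at this point)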
00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:03.924 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.182 [2024-07-10 14:45:16.305393] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.182 14:45:16 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:04.182 [2024-07-10 14:45:16.407530] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:04.182 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.183 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:27:04.183 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.183 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:04.183 [2024-07-10 14:45:16.451571] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:27:04.183 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.183 14:45:16 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:27:04.183 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.183 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:04.183 [2024-07-10 14:45:16.459479] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:04.183 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.183 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:27:04.183 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.183 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:04.183 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.183 14:45:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:27:05.177 [2024-07-10 14:45:17.205395] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:27:05.743 [2024-07-10 14:45:17.805417] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:27:05.743 [2024-07-10 14:45:17.805472] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:27:05.743 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:05.743 cookie is 0 00:27:05.743 is_local: 1 00:27:05.743 our_own: 0 00:27:05.743 wide_area: 0 00:27:05.743 multicast: 1 00:27:05.743 cached: 1 00:27:05.743 [2024-07-10 14:45:17.905410] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:27:05.743 [2024-07-10 14:45:17.905459] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:27:05.743 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:05.743 cookie is 0 00:27:05.743 is_local: 1 00:27:05.743 our_own: 0 00:27:05.743 wide_area: 0 00:27:05.743 multicast: 1 00:27:05.743 cached: 1 00:27:05.744 [2024-07-10 14:45:17.905474] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:27:05.744 [2024-07-10 14:45:18.005408] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:27:05.744 [2024-07-10 14:45:18.005457] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:27:05.744 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:05.744 cookie is 0 00:27:05.744 is_local: 1 00:27:05.744 our_own: 0 00:27:05.744 wide_area: 0 00:27:05.744 multicast: 1 00:27:05.744 cached: 1 00:27:06.002 [2024-07-10 14:45:18.105403] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:27:06.002 [2024-07-10 14:45:18.105450] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:27:06.002 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:06.002 cookie is 0 00:27:06.002 is_local: 1 00:27:06.002 our_own: 0 00:27:06.002 wide_area: 0 00:27:06.002 multicast: 1 00:27:06.002 cached: 1 00:27:06.002 [2024-07-10 14:45:18.105464] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:27:06.568 [2024-07-10 14:45:18.809456] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:27:06.568 [2024-07-10 14:45:18.809515] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:27:06.568 [2024-07-10 14:45:18.809544] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:06.827 [2024-07-10 14:45:18.895637] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:27:06.827 [2024-07-10 14:45:18.952728] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:27:06.827 [2024-07-10 14:45:18.952774] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:27:06.827 [2024-07-10 14:45:19.009161] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:06.827 [2024-07-10 14:45:19.009205] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:06.827 [2024-07-10 14:45:19.009226] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:06.827 [2024-07-10 14:45:19.097327] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:27:07.085 [2024-07-10 14:45:19.160612] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:27:07.085 [2024-07-10 14:45:19.160662] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:09.619 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:27:09.619 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:27:09.619 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.619 
14:45:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:09.619 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:27:09.619 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:27:09.619 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:27:09.619 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.619 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:27:09.619 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:27:09.619 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:09.619 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.619 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:27:09.619 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:09.619 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:27:09.619 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:27:09.619 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.619 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:27:09.619 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:27:09.619 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:09.619 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.619 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:09.619 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:09.619 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 
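The target-side steps traced above (host/mdns_discovery.sh@105 through @124) amount to: add an I/O listener for the first subsystem, create a second subsystem with a namespace and an allowed host, expose its discovery and I/O listeners on 10.0.0.3, and advertise the target's discovery service over mDNS. A minimal standalone sketch of that sequence follows; it assumes SPDK's scripts/rpc.py against the default target RPC socket, whereas the test issues the same RPCs through its rpc_cmd wrapper.

# Sketch only: same RPC names and arguments as the rpc_cmd calls traced above.
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test
scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420
# Advertise the discovery service over mDNS (_nvme-disc._tcp) so browsing hosts see both listeners.
scripts/rpc.py nvmf_publish_mdns_prr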
00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.620 14:45:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:27:10.996 14:45:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:27:10.997 14:45:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:10.997 14:45:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:10.997 14:45:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:10.997 14:45:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.997 14:45:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:10.997 14:45:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:10.997 14:45:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.997 14:45:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:10.997 14:45:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:27:10.997 14:45:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:10.997 14:45:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:27:10.997 14:45:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.997 14:45:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:10.997 14:45:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.997 14:45:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:27:10.997 14:45:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:27:10.997 14:45:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:27:10.997 14:45:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:10.997 14:45:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.997 14:45:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:10.997 [2024-07-10 14:45:22.982568] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:10.997 [2024-07-10 14:45:22.983754] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:10.997 [2024-07-10 14:45:22.983796] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:10.997 [2024-07-10 14:45:22.983834] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:10.997 [2024-07-10 14:45:22.983849] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:10.997 14:45:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.997 14:45:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:27:10.997 14:45:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.997 14:45:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:10.997 [2024-07-10 14:45:22.990530] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:27:10.997 [2024-07-10 14:45:22.990783] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:10.997 [2024-07-10 14:45:22.990837] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:10.997 14:45:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.997 14:45:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:27:10.997 [2024-07-10 14:45:23.120882] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:27:10.997 [2024-07-10 14:45:23.121854] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:27:10.997 [2024-07-10 14:45:23.183133] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:27:10.997 [2024-07-10 14:45:23.183196] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:10.997 [2024-07-10 14:45:23.183207] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:10.997 [2024-07-10 14:45:23.183257] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:10.997 [2024-07-10 14:45:23.183348] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:27:10.997 [2024-07-10 14:45:23.183365] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:27:10.997 [2024-07-10 14:45:23.183374] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:10.997 [2024-07-10 14:45:23.183398] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:10.997 [2024-07-10 14:45:23.229064] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:27:10.997 [2024-07-10 14:45:23.229119] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:10.997 [2024-07-10 14:45:23.229193] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:10.997 [2024-07-10 14:45:23.229207] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:11.931 14:45:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:27:11.931 14:45:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:11.931 14:45:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:11.931 14:45:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:11.931 14:45:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:11.931 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.931 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:11.931 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.931 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:27:11.931 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:27:11.931 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:11.931 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:11.931 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:11.931 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.931 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:11.931 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:11.931 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.932 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == 
\m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:11.932 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:27:11.932 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:27:11.932 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:11.932 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.932 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:11.932 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:11.932 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:11.932 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.192 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:12.192 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:27:12.192 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:27:12.193 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.193 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:12.193 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.193 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:12.193 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:12.193 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.193 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:12.193 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:27:12.193 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:27:12.193 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.193 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:27:12.193 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.193 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.193 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:27:12.193 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:27:12.193 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:27:12.193 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:12.193 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.193 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.193 [2024-07-10 14:45:24.335865] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:12.193 [2024-07-10 14:45:24.335906] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:12.193 [2024-07-10 14:45:24.335943] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:12.193 [2024-07-10 14:45:24.335957] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:12.193 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.193 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:27:12.193 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.193 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.193 [2024-07-10 14:45:24.341555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.193 [2024-07-10 14:45:24.341733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.193 [2024-07-10 14:45:24.341872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.193 [2024-07-10 14:45:24.341988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.193 [2024-07-10 14:45:24.342104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.193 [2024-07-10 14:45:24.342227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.193 [2024-07-10 14:45:24.342421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.193 [2024-07-10 14:45:24.342558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.193 [2024-07-10 14:45:24.342684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x995a60 is same with the state(5) to be set 00:27:12.193 [2024-07-10 14:45:24.347939] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: 
Discovery[10.0.0.3:8009] got aer 00:27:12.193 [2024-07-10 14:45:24.348153] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:12.193 [2024-07-10 14:45:24.351501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x995a60 (9): Bad file descriptor 00:27:12.193 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.193 14:45:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:27:12.193 [2024-07-10 14:45:24.352973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.193 [2024-07-10 14:45:24.353196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.193 [2024-07-10 14:45:24.353381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.193 [2024-07-10 14:45:24.353403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.193 [2024-07-10 14:45:24.353419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.193 [2024-07-10 14:45:24.353436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.193 [2024-07-10 14:45:24.353452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.193 [2024-07-10 14:45:24.353467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.193 [2024-07-10 14:45:24.353482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94f960 is same with the state(5) to be set 00:27:12.193 [2024-07-10 14:45:24.361561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:12.193 [2024-07-10 14:45:24.361758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.193 [2024-07-10 14:45:24.361784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x995a60 with addr=10.0.0.2, port=4420 00:27:12.193 [2024-07-10 14:45:24.361797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x995a60 is same with the state(5) to be set 00:27:12.193 [2024-07-10 14:45:24.361819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x995a60 (9): Bad file descriptor 00:27:12.193 [2024-07-10 14:45:24.361836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:12.193 [2024-07-10 14:45:24.361846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:12.193 [2024-07-10 14:45:24.361858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:12.193 [2024-07-10 14:45:24.361880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
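For reference, the recurring host-side checks in this trace (get_mdns_discovery_svcs at @81, get_bdev_list at @65, get_notification_count at @88) reduce to RPCs against the host application's socket filtered through jq. A rough standalone equivalent, assuming the same /tmp/host.sock path and direct use of scripts/rpc.py in place of the test's rpc_cmd wrapper:

# Name of the mDNS discovery service running in the host-side bdev_nvme module (expected: mdns).
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info | jq -r '.[].name' | sort | xargs
# Bdevs created from the discovered subsystems (expected here: mdns0_nvme0n1 ... mdns1_nvme0n2).
scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
# Count of notifications issued after notify_id 4, as in the @88 check above.
scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 4 | jq '. | length'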
00:27:12.193 [2024-07-10 14:45:24.362905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94f960 (9): Bad file descriptor 00:27:12.193 [2024-07-10 14:45:24.371665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:12.193 [2024-07-10 14:45:24.371794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.193 [2024-07-10 14:45:24.371817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x995a60 with addr=10.0.0.2, port=4420 00:27:12.193 [2024-07-10 14:45:24.371829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x995a60 is same with the state(5) to be set 00:27:12.193 [2024-07-10 14:45:24.371847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x995a60 (9): Bad file descriptor 00:27:12.193 [2024-07-10 14:45:24.371863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:12.193 [2024-07-10 14:45:24.371872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:12.193 [2024-07-10 14:45:24.371882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:12.193 [2024-07-10 14:45:24.371898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:12.193 [2024-07-10 14:45:24.372932] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:12.193 [2024-07-10 14:45:24.373019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.193 [2024-07-10 14:45:24.373040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94f960 with addr=10.0.0.3, port=4420 00:27:12.193 [2024-07-10 14:45:24.373051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94f960 is same with the state(5) to be set 00:27:12.193 [2024-07-10 14:45:24.373067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94f960 (9): Bad file descriptor 00:27:12.193 [2024-07-10 14:45:24.373081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:12.193 [2024-07-10 14:45:24.373090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:12.193 [2024-07-10 14:45:24.373100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:12.193 [2024-07-10 14:45:24.373115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.193 [2024-07-10 14:45:24.381741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:12.193 [2024-07-10 14:45:24.381870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.193 [2024-07-10 14:45:24.381894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x995a60 with addr=10.0.0.2, port=4420 00:27:12.193 [2024-07-10 14:45:24.381905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x995a60 is same with the state(5) to be set 00:27:12.193 [2024-07-10 14:45:24.381925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x995a60 (9): Bad file descriptor 00:27:12.193 [2024-07-10 14:45:24.381961] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:12.193 [2024-07-10 14:45:24.381973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:12.193 [2024-07-10 14:45:24.381984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:12.193 [2024-07-10 14:45:24.381999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:12.193 [2024-07-10 14:45:24.382986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:12.193 [2024-07-10 14:45:24.383078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.193 [2024-07-10 14:45:24.383100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94f960 with addr=10.0.0.3, port=4420 00:27:12.193 [2024-07-10 14:45:24.383110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94f960 is same with the state(5) to be set 00:27:12.193 [2024-07-10 14:45:24.383127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94f960 (9): Bad file descriptor 00:27:12.193 [2024-07-10 14:45:24.383143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:12.193 [2024-07-10 14:45:24.383152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:12.193 [2024-07-10 14:45:24.383162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:12.193 [2024-07-10 14:45:24.383177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.193 [2024-07-10 14:45:24.391820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:12.193 [2024-07-10 14:45:24.391934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.193 [2024-07-10 14:45:24.391957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x995a60 with addr=10.0.0.2, port=4420 00:27:12.193 [2024-07-10 14:45:24.391967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x995a60 is same with the state(5) to be set 00:27:12.193 [2024-07-10 14:45:24.391985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x995a60 (9): Bad file descriptor 00:27:12.193 [2024-07-10 14:45:24.392101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:12.194 [2024-07-10 14:45:24.392116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:12.194 [2024-07-10 14:45:24.392128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:12.194 [2024-07-10 14:45:24.392145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:12.194 [2024-07-10 14:45:24.393051] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:12.194 [2024-07-10 14:45:24.393144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.194 [2024-07-10 14:45:24.393166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94f960 with addr=10.0.0.3, port=4420 00:27:12.194 [2024-07-10 14:45:24.393176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94f960 is same with the state(5) to be set 00:27:12.194 [2024-07-10 14:45:24.393192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94f960 (9): Bad file descriptor 00:27:12.194 [2024-07-10 14:45:24.393207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:12.194 [2024-07-10 14:45:24.393216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:12.194 [2024-07-10 14:45:24.393226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:12.194 [2024-07-10 14:45:24.393240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.194 [2024-07-10 14:45:24.401894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:12.194 [2024-07-10 14:45:24.402006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.194 [2024-07-10 14:45:24.402027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x995a60 with addr=10.0.0.2, port=4420 00:27:12.194 [2024-07-10 14:45:24.402038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x995a60 is same with the state(5) to be set 00:27:12.194 [2024-07-10 14:45:24.402056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x995a60 (9): Bad file descriptor 00:27:12.194 [2024-07-10 14:45:24.402089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:12.194 [2024-07-10 14:45:24.402100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:12.194 [2024-07-10 14:45:24.402110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:12.194 [2024-07-10 14:45:24.402126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:12.194 [2024-07-10 14:45:24.403108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:12.194 [2024-07-10 14:45:24.403189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.194 [2024-07-10 14:45:24.403210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94f960 with addr=10.0.0.3, port=4420 00:27:12.194 [2024-07-10 14:45:24.403220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94f960 is same with the state(5) to be set 00:27:12.194 [2024-07-10 14:45:24.403236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94f960 (9): Bad file descriptor 00:27:12.194 [2024-07-10 14:45:24.403250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:12.194 [2024-07-10 14:45:24.403259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:12.194 [2024-07-10 14:45:24.403268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:12.194 [2024-07-10 14:45:24.403299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.194 [2024-07-10 14:45:24.411964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:12.194 [2024-07-10 14:45:24.412072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.194 [2024-07-10 14:45:24.412094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x995a60 with addr=10.0.0.2, port=4420 00:27:12.194 [2024-07-10 14:45:24.412105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x995a60 is same with the state(5) to be set 00:27:12.194 [2024-07-10 14:45:24.412122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x995a60 (9): Bad file descriptor 00:27:12.194 [2024-07-10 14:45:24.412156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:12.194 [2024-07-10 14:45:24.412167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:12.194 [2024-07-10 14:45:24.412177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:12.194 [2024-07-10 14:45:24.412192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:12.194 [2024-07-10 14:45:24.413159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:12.194 [2024-07-10 14:45:24.413240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.194 [2024-07-10 14:45:24.413261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94f960 with addr=10.0.0.3, port=4420 00:27:12.194 [2024-07-10 14:45:24.413271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94f960 is same with the state(5) to be set 00:27:12.194 [2024-07-10 14:45:24.413299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94f960 (9): Bad file descriptor 00:27:12.194 [2024-07-10 14:45:24.413316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:12.194 [2024-07-10 14:45:24.413325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:12.194 [2024-07-10 14:45:24.413334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:12.194 [2024-07-10 14:45:24.413349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.194 [2024-07-10 14:45:24.422029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:12.194 [2024-07-10 14:45:24.422122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.194 [2024-07-10 14:45:24.422143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x995a60 with addr=10.0.0.2, port=4420 00:27:12.194 [2024-07-10 14:45:24.422154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x995a60 is same with the state(5) to be set 00:27:12.194 [2024-07-10 14:45:24.422170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x995a60 (9): Bad file descriptor 00:27:12.194 [2024-07-10 14:45:24.422201] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:12.194 [2024-07-10 14:45:24.422212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:12.194 [2024-07-10 14:45:24.422221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:12.194 [2024-07-10 14:45:24.422236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:12.194 [2024-07-10 14:45:24.423210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:12.194 [2024-07-10 14:45:24.423301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.194 [2024-07-10 14:45:24.423322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94f960 with addr=10.0.0.3, port=4420 00:27:12.194 [2024-07-10 14:45:24.423333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94f960 is same with the state(5) to be set 00:27:12.194 [2024-07-10 14:45:24.423348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94f960 (9): Bad file descriptor 00:27:12.194 [2024-07-10 14:45:24.423363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:12.194 [2024-07-10 14:45:24.423372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:12.194 [2024-07-10 14:45:24.423382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:12.194 [2024-07-10 14:45:24.423396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.194 [2024-07-10 14:45:24.432088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:12.194 [2024-07-10 14:45:24.432184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.194 [2024-07-10 14:45:24.432204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x995a60 with addr=10.0.0.2, port=4420 00:27:12.194 [2024-07-10 14:45:24.432214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x995a60 is same with the state(5) to be set 00:27:12.194 [2024-07-10 14:45:24.432230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x995a60 (9): Bad file descriptor 00:27:12.194 [2024-07-10 14:45:24.432264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:12.194 [2024-07-10 14:45:24.432275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:12.194 [2024-07-10 14:45:24.432305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:12.194 [2024-07-10 14:45:24.432324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:12.194 [2024-07-10 14:45:24.433262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:12.194 [2024-07-10 14:45:24.433352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.194 [2024-07-10 14:45:24.433374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94f960 with addr=10.0.0.3, port=4420 00:27:12.194 [2024-07-10 14:45:24.433384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94f960 is same with the state(5) to be set 00:27:12.194 [2024-07-10 14:45:24.433400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94f960 (9): Bad file descriptor 00:27:12.194 [2024-07-10 14:45:24.433415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:12.194 [2024-07-10 14:45:24.433424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:12.194 [2024-07-10 14:45:24.433433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:12.194 [2024-07-10 14:45:24.433447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.194 [2024-07-10 14:45:24.442152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:12.194 [2024-07-10 14:45:24.442263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.194 [2024-07-10 14:45:24.442302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x995a60 with addr=10.0.0.2, port=4420 00:27:12.194 [2024-07-10 14:45:24.442316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x995a60 is same with the state(5) to be set 00:27:12.194 [2024-07-10 14:45:24.442335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x995a60 (9): Bad file descriptor 00:27:12.194 [2024-07-10 14:45:24.442368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:12.194 [2024-07-10 14:45:24.442379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:12.194 [2024-07-10 14:45:24.442389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:12.194 [2024-07-10 14:45:24.442405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:12.194 [2024-07-10 14:45:24.443321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:12.194 [2024-07-10 14:45:24.443402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.194 [2024-07-10 14:45:24.443422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94f960 with addr=10.0.0.3, port=4420 00:27:12.194 [2024-07-10 14:45:24.443433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94f960 is same with the state(5) to be set 00:27:12.194 [2024-07-10 14:45:24.443468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94f960 (9): Bad file descriptor 00:27:12.194 [2024-07-10 14:45:24.443484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:12.195 [2024-07-10 14:45:24.443493] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:12.195 [2024-07-10 14:45:24.443503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:12.195 [2024-07-10 14:45:24.443518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.195 [2024-07-10 14:45:24.452223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:12.195 [2024-07-10 14:45:24.452379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.195 [2024-07-10 14:45:24.452403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x995a60 with addr=10.0.0.2, port=4420 00:27:12.195 [2024-07-10 14:45:24.452414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x995a60 is same with the state(5) to be set 00:27:12.195 [2024-07-10 14:45:24.452431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x995a60 (9): Bad file descriptor 00:27:12.195 [2024-07-10 14:45:24.452467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:12.195 [2024-07-10 14:45:24.452477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:12.195 [2024-07-10 14:45:24.452487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:12.195 [2024-07-10 14:45:24.452502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:12.195 [2024-07-10 14:45:24.453373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:12.195 [2024-07-10 14:45:24.453449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.195 [2024-07-10 14:45:24.453470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94f960 with addr=10.0.0.3, port=4420 00:27:12.195 [2024-07-10 14:45:24.453480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94f960 is same with the state(5) to be set 00:27:12.195 [2024-07-10 14:45:24.453496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94f960 (9): Bad file descriptor 00:27:12.195 [2024-07-10 14:45:24.453529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:12.195 [2024-07-10 14:45:24.453540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:12.195 [2024-07-10 14:45:24.453550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:12.195 [2024-07-10 14:45:24.453564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.195 [2024-07-10 14:45:24.462344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:12.195 [2024-07-10 14:45:24.462541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.195 [2024-07-10 14:45:24.462570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x995a60 with addr=10.0.0.2, port=4420 00:27:12.195 [2024-07-10 14:45:24.462582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x995a60 is same with the state(5) to be set 00:27:12.195 [2024-07-10 14:45:24.462603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x995a60 (9): Bad file descriptor 00:27:12.195 [2024-07-10 14:45:24.462641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:12.195 [2024-07-10 14:45:24.462652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:12.195 [2024-07-10 14:45:24.462663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:12.195 [2024-07-10 14:45:24.462679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:12.195 [2024-07-10 14:45:24.463423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:12.195 [2024-07-10 14:45:24.463504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.195 [2024-07-10 14:45:24.463524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94f960 with addr=10.0.0.3, port=4420 00:27:12.195 [2024-07-10 14:45:24.463534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94f960 is same with the state(5) to be set 00:27:12.195 [2024-07-10 14:45:24.463550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94f960 (9): Bad file descriptor 00:27:12.195 [2024-07-10 14:45:24.463564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:12.195 [2024-07-10 14:45:24.463573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:12.195 [2024-07-10 14:45:24.463583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:12.195 [2024-07-10 14:45:24.463610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.195 [2024-07-10 14:45:24.472457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:12.195 [2024-07-10 14:45:24.472619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.195 [2024-07-10 14:45:24.472644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x995a60 with addr=10.0.0.2, port=4420 00:27:12.195 [2024-07-10 14:45:24.472656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x995a60 is same with the state(5) to be set 00:27:12.195 [2024-07-10 14:45:24.472675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x995a60 (9): Bad file descriptor 00:27:12.195 [2024-07-10 14:45:24.472710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:12.195 [2024-07-10 14:45:24.472720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:12.195 [2024-07-10 14:45:24.472731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:12.195 [2024-07-10 14:45:24.472746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:12.195 [2024-07-10 14:45:24.473475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:12.195 [2024-07-10 14:45:24.473557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.195 [2024-07-10 14:45:24.473578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94f960 with addr=10.0.0.3, port=4420 00:27:12.195 [2024-07-10 14:45:24.473588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94f960 is same with the state(5) to be set 00:27:12.195 [2024-07-10 14:45:24.473604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94f960 (9): Bad file descriptor 00:27:12.195 [2024-07-10 14:45:24.473631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:12.195 [2024-07-10 14:45:24.473642] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:12.195 [2024-07-10 14:45:24.473651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:12.195 [2024-07-10 14:45:24.473666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
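The repeated "connect() failed, errno = 111" / "controller reinitialization failed" blocks above are the reset path retrying against 10.0.0.2:4420 and 10.0.0.3:4420 after those listeners were removed; the discovery log page handling that follows moves both subsystems to port 4421. While that migration is in flight, the attached controllers and the port each path is using can be polled over the same host-side RPC socket this test uses. A minimal sketch, assuming the rpc.py path and /tmp/host.sock socket from this run (the loop count is arbitrary):

#!/usr/bin/env bash
# Sketch: poll controller/path state through the host-side RPC socket while
# the discovery service migrates subsystems from port 4420 to 4421.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/tmp/host.sock

for _ in $(seq 1 10); do
    # Controller names currently attached by the mDNS discovery service
    "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'
    # Port (trsvcid) of every active path; 4420 disappears and 4421 remains
    "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].ctrlrs[].trid.trsvcid'
    sleep 1
done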
00:27:12.195 [2024-07-10 14:45:24.478725] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:27:12.195 [2024-07-10 14:45:24.478756] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:12.195 [2024-07-10 14:45:24.478793] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:12.195 [2024-07-10 14:45:24.478829] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:12.195 [2024-07-10 14:45:24.478846] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:12.195 [2024-07-10 14:45:24.478860] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:12.454 [2024-07-10 14:45:24.564833] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:12.454 [2024-07-10 14:45:24.564931] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 
00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:27:13.388 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.389 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:13.389 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.389 14:45:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:27:13.646 [2024-07-10 14:45:25.705463] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.580 14:45:26 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.580 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.839 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:27:14.839 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:27:14.839 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:27:14.839 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:14.839 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.839 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.839 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.839 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:27:14.839 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:27:14.839 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:27:14.839 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:14.840 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:14.840 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:14.840 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:14.840 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:27:14.840 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.840 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.840 [2024-07-10 14:45:26.900149] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:27:14.840 
2024/07/10 14:45:26 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:27:14.840 request: 00:27:14.840 { 00:27:14.840 "method": "bdev_nvme_start_mdns_discovery", 00:27:14.840 "params": { 00:27:14.840 "name": "mdns", 00:27:14.840 "svcname": "_nvme-disc._http", 00:27:14.840 "hostnqn": "nqn.2021-12.io.spdk:test" 00:27:14.840 } 00:27:14.840 } 00:27:14.840 Got JSON-RPC error response 00:27:14.840 GoRPCClient: error on JSON-RPC call 00:27:14.840 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:14.840 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:27:14.840 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:14.840 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:14.840 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:14.840 14:45:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:27:15.406 [2024-07-10 14:45:27.488703] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:27:15.406 [2024-07-10 14:45:27.588705] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:27:15.406 [2024-07-10 14:45:27.688706] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:27:15.406 [2024-07-10 14:45:27.688750] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:27:15.406 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:15.406 cookie is 0 00:27:15.406 is_local: 1 00:27:15.406 our_own: 0 00:27:15.406 wide_area: 0 00:27:15.406 multicast: 1 00:27:15.406 cached: 1 00:27:15.664 [2024-07-10 14:45:27.788712] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:27:15.664 [2024-07-10 14:45:27.788758] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:27:15.664 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:15.664 cookie is 0 00:27:15.664 is_local: 1 00:27:15.664 our_own: 0 00:27:15.664 wide_area: 0 00:27:15.664 multicast: 1 00:27:15.664 cached: 1 00:27:15.664 [2024-07-10 14:45:27.788773] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:27:15.664 [2024-07-10 14:45:27.888713] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:27:15.664 [2024-07-10 14:45:27.888760] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:27:15.664 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:15.664 cookie is 0 00:27:15.664 is_local: 1 00:27:15.664 our_own: 0 00:27:15.664 wide_area: 0 00:27:15.664 multicast: 1 00:27:15.664 cached: 1 00:27:15.922 [2024-07-10 14:45:27.988718] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:27:15.922 [2024-07-10 14:45:27.988770] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:27:15.922 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:15.922 cookie is 0 00:27:15.922 is_local: 1 00:27:15.922 our_own: 0 00:27:15.922 wide_area: 0 00:27:15.922 multicast: 1 00:27:15.922 cached: 1 00:27:15.922 [2024-07-10 14:45:27.988788] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:27:16.489 [2024-07-10 14:45:28.701487] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:27:16.489 [2024-07-10 14:45:28.701535] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:27:16.489 [2024-07-10 14:45:28.701555] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:16.747 [2024-07-10 14:45:28.787620] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:27:16.747 [2024-07-10 14:45:28.847979] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:27:16.747 [2024-07-10 14:45:28.848033] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:16.747 [2024-07-10 14:45:28.901582] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:16.747 [2024-07-10 14:45:28.901630] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:16.747 [2024-07-10 14:45:28.901651] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:16.747 [2024-07-10 14:45:28.987795] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:27:17.005 [2024-07-10 14:45:29.048403] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:27:17.005 [2024-07-10 14:45:29.048452] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:20.289 14:45:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:27:20.289 14:45:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:27:20.289 14:45:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:27:20.289 14:45:31 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:27:20.289 14:45:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.289 14:45:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.289 14:45:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:27:20.289 14:45:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.289 14:45:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:27:20.289 14:45:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:27:20.289 14:45:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:20.289 14:45:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.289 14:45:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.289 14:45:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:27:20.289 14:45:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:27:20.289 14:45:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:27:20.289 14:45:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.289 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:27:20.289 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:27:20.289 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:20.289 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.289 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:20.289 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.289 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.290 [2024-07-10 14:45:32.090627] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:27:20.290 2024/07/10 14:45:32 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:27:20.290 request: 00:27:20.290 { 00:27:20.290 "method": "bdev_nvme_start_mdns_discovery", 00:27:20.290 "params": { 00:27:20.290 "name": "cdc", 00:27:20.290 "svcname": "_nvme-disc._tcp", 00:27:20.290 "hostnqn": "nqn.2021-12.io.spdk:test" 00:27:20.290 } 00:27:20.290 } 00:27:20.290 Got JSON-RPC error response 00:27:20.290 GoRPCClient: error on JSON-RPC call 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 113360 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 113360 00:27:20.290 [2024-07-10 14:45:32.295719] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 113371 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:27:20.290 Got SIGTERM, quitting. 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:20.290 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:27:20.290 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:27:20.290 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:27:20.290 avahi-daemon 0.8 exiting. 
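Both "NOT rpc_cmd ... bdev_nvme_start_mdns_discovery" checks above are negative tests: re-registering mDNS discovery under an already-used bdev name (-b mdns) or for a service that is already being browsed (_nvme-disc._tcp) is expected to fail with JSON-RPC error Code=-17 Msg=File exists. A minimal reproduction against the same host RPC socket, assuming no discovery service is registered yet (sketch only; cleanup and error handling kept to the essentials):

#!/usr/bin/env bash
# Sketch: duplicate-registration behaviour exercised by the NOT checks above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/tmp/host.sock

# First registration starts the avahi browse for _nvme-disc._tcp and succeeds.
"$rpc" -s "$sock" bdev_nvme_start_mdns_discovery \
    -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

# Registering again with the same bdev name (or the same running service)
# is rejected with Code=-17 (File exists), which is what the test expects.
if ! "$rpc" -s "$sock" bdev_nvme_start_mdns_discovery \
        -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test; then
    echo "duplicate mDNS discovery registration rejected as expected"
fi

# Stop the discovery service when done.
"$rpc" -s "$sock" bdev_nvme_stop_mdns_discovery -b mdns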
00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:20.290 rmmod nvme_tcp 00:27:20.290 rmmod nvme_fabrics 00:27:20.290 rmmod nvme_keyring 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 113319 ']' 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 113319 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@948 -- # '[' -z 113319 ']' 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # kill -0 113319 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # uname 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113319 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113319' 00:27:20.290 killing process with pid 113319 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@967 -- # kill 113319 00:27:20.290 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # wait 113319 00:27:20.548 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:20.548 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:20.548 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:20.548 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:20.548 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:20.548 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.548 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:20.548 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.548 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:20.548 00:27:20.548 real 0m19.086s 00:27:20.548 user 0m37.983s 00:27:20.548 sys 0m1.886s 00:27:20.548 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:20.548 14:45:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.548 ************************************ 00:27:20.548 END TEST nvmf_mdns_discovery 00:27:20.548 ************************************ 00:27:20.548 14:45:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 
0 00:27:20.548 14:45:32 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:27:20.548 14:45:32 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:20.548 14:45:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:20.548 14:45:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:20.548 14:45:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:20.548 ************************************ 00:27:20.548 START TEST nvmf_host_multipath 00:27:20.548 ************************************ 00:27:20.548 14:45:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:20.548 * Looking for test storage... 00:27:20.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:20.548 14:45:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:20.548 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:27:20.548 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.548 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.548 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath 
-- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:20.549 14:45:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:20.814 Cannot 
find device "nvmf_tgt_br" 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:20.814 Cannot find device "nvmf_tgt_br2" 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:20.814 Cannot find device "nvmf_tgt_br" 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:20.814 Cannot find device "nvmf_tgt_br2" 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:20.814 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:20.814 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:20.814 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:20.815 14:45:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:20.815 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:20.815 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:20.815 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:20.815 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:20.815 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:20.815 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:20.815 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:20.815 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:20.815 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:20.815 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:20.815 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:20.815 14:45:33 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:20.815 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:21.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:21.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:27:21.083 00:27:21.083 --- 10.0.0.2 ping statistics --- 00:27:21.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.083 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:21.083 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:21.083 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:27:21.083 00:27:21.083 --- 10.0.0.3 ping statistics --- 00:27:21.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.083 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:21.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:21.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:27:21.083 00:27:21.083 --- 10.0.0.1 ping statistics --- 00:27:21.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.083 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=113924 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 113924 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 113924 ']' 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:21.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:21.083 14:45:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:21.083 [2024-07-10 14:45:33.290102] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:27:21.083 [2024-07-10 14:45:33.290243] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:21.341 [2024-07-10 14:45:33.423777] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
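The nvmf_veth_init trace above builds the virtual topology the multipath test runs on: the target lives in the nvmf_tgt_ns_spdk network namespace with two veth interfaces (10.0.0.2 and 10.0.0.3), the initiator side keeps nvmf_init_if (10.0.0.1), and the host-side peer ends are joined through the nvmf_br bridge before nvmf_tgt is launched inside the namespace with core mask 0x3. Condensed into a standalone sketch using the same interface names and addresses as this run:

#!/usr/bin/env bash
# Sketch of the veth/namespace topology created by nvmf_veth_init above.
set -e

ip netns add nvmf_tgt_ns_spdk

# veth pairs: one initiator-side, two target-side (one per listen address)
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target ends into the namespace and assign addresses
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peer ends together
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP traffic to the initiator interface and across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# The target is then started inside the namespace, as in the trace above:
# ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3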
00:27:21.341 [2024-07-10 14:45:33.443972] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:21.341 [2024-07-10 14:45:33.486174] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:21.341 [2024-07-10 14:45:33.486241] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:21.341 [2024-07-10 14:45:33.486255] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:21.341 [2024-07-10 14:45:33.486265] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:21.341 [2024-07-10 14:45:33.486275] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:21.341 [2024-07-10 14:45:33.486451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.341 [2024-07-10 14:45:33.486464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.277 14:45:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:22.277 14:45:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:27:22.277 14:45:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:22.277 14:45:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:22.277 14:45:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:22.277 14:45:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:22.277 14:45:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=113924 00:27:22.277 14:45:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:22.277 [2024-07-10 14:45:34.481809] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.277 14:45:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:22.535 Malloc0 00:27:22.535 14:45:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:22.806 14:45:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:23.113 14:45:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:23.371 [2024-07-10 14:45:35.449851] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:23.371 14:45:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:23.628 [2024-07-10 14:45:35.718033] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:23.628 14:45:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=114023 00:27:23.628 14:45:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 
4096 -w verify -t 90 00:27:23.628 14:45:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:23.628 14:45:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 114023 /var/tmp/bdevperf.sock 00:27:23.628 14:45:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 114023 ']' 00:27:23.628 14:45:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:23.628 14:45:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:23.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:23.628 14:45:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:23.628 14:45:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:23.628 14:45:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:24.561 14:45:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:24.561 14:45:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:27:24.561 14:45:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:24.819 14:45:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:27:25.385 Nvme0n1 00:27:25.385 14:45:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:25.644 Nvme0n1 00:27:25.644 14:45:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:27:25.644 14:45:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:26.580 14:45:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:27:26.580 14:45:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:26.838 14:45:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:27.095 14:45:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:27:27.095 14:45:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114116 00:27:27.095 14:45:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113924 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:27.095 14:45:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:33.665 14:45:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:33.665 14:45:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:33.665 14:45:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:27:33.665 14:45:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:33.665 Attaching 4 probes... 00:27:33.665 @path[10.0.0.2, 4421]: 16686 00:27:33.665 @path[10.0.0.2, 4421]: 17279 00:27:33.665 @path[10.0.0.2, 4421]: 17393 00:27:33.665 @path[10.0.0.2, 4421]: 17092 00:27:33.665 @path[10.0.0.2, 4421]: 14714 00:27:33.665 14:45:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:33.665 14:45:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:33.665 14:45:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:33.665 14:45:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:27:33.665 14:45:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:33.665 14:45:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:33.665 14:45:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114116 00:27:33.665 14:45:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:33.665 14:45:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:27:33.665 14:45:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:33.665 14:45:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:33.923 14:45:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:27:33.923 14:45:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114241 00:27:33.923 14:45:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113924 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:33.923 14:45:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:40.480 14:45:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:40.480 14:45:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:27:40.480 14:45:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:27:40.480 14:45:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:40.480 Attaching 4 probes... 
00:27:40.480 @path[10.0.0.2, 4420]: 16391 00:27:40.480 @path[10.0.0.2, 4420]: 16433 00:27:40.480 @path[10.0.0.2, 4420]: 15559 00:27:40.480 @path[10.0.0.2, 4420]: 17080 00:27:40.480 @path[10.0.0.2, 4420]: 16011 00:27:40.480 14:45:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:40.480 14:45:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:40.480 14:45:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:40.480 14:45:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:27:40.480 14:45:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:27:40.480 14:45:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:27:40.480 14:45:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114241 00:27:40.480 14:45:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:40.480 14:45:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:27:40.480 14:45:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:40.480 14:45:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:40.738 14:45:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:27:40.738 14:45:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114372 00:27:40.738 14:45:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113924 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:40.738 14:45:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:47.299 14:45:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:47.299 14:45:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:47.299 14:45:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:27:47.299 14:45:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:47.299 Attaching 4 probes... 
00:27:47.299 @path[10.0.0.2, 4421]: 12162 00:27:47.299 @path[10.0.0.2, 4421]: 17031 00:27:47.299 @path[10.0.0.2, 4421]: 16511 00:27:47.299 @path[10.0.0.2, 4421]: 15958 00:27:47.299 @path[10.0.0.2, 4421]: 16309 00:27:47.299 14:45:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:47.299 14:45:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:47.299 14:45:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:47.299 14:45:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:27:47.299 14:45:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:47.299 14:45:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:47.299 14:45:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114372 00:27:47.299 14:45:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:47.299 14:45:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:27:47.299 14:45:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:47.299 14:45:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:47.864 14:45:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:27:47.864 14:45:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114503 00:27:47.864 14:45:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113924 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:47.864 14:45:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:54.425 14:46:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:54.425 14:46:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:27:54.425 14:46:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:27:54.425 14:46:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:54.425 Attaching 4 probes... 
00:27:54.425 00:27:54.425 00:27:54.425 00:27:54.425 00:27:54.425 00:27:54.425 14:46:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:54.425 14:46:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:54.425 14:46:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:54.425 14:46:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:27:54.425 14:46:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:27:54.425 14:46:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:27:54.425 14:46:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114503 00:27:54.425 14:46:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:54.425 14:46:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:27:54.425 14:46:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:54.425 14:46:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:54.683 14:46:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:27:54.683 14:46:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114634 00:27:54.683 14:46:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113924 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:54.683 14:46:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:01.256 14:46:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:01.256 14:46:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:01.256 14:46:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:28:01.256 14:46:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:01.256 Attaching 4 probes... 
00:28:01.256 @path[10.0.0.2, 4421]: 15277 00:28:01.256 @path[10.0.0.2, 4421]: 16144 00:28:01.256 @path[10.0.0.2, 4421]: 16288 00:28:01.256 @path[10.0.0.2, 4421]: 16597 00:28:01.256 @path[10.0.0.2, 4421]: 16241 00:28:01.256 14:46:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:01.256 14:46:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:01.256 14:46:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:01.256 14:46:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:28:01.256 14:46:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:01.256 14:46:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:01.256 14:46:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114634 00:28:01.256 14:46:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:01.256 14:46:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:01.256 [2024-07-10 14:46:13.435114] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435190] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435203] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435212] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435221] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435229] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435238] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435246] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435255] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435263] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435271] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435296] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435307] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435315] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 
00:28:01.256 [2024-07-10 14:46:13.435323] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435332] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435340] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435348] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435357] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435365] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435374] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435382] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435390] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435398] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435407] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435415] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435423] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435431] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435441] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435449] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435458] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435466] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435475] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435491] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 [2024-07-10 14:46:13.435499] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1f23f00 is same with the state(5) to be set 00:28:01.256 14:46:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:28:02.190 14:46:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:28:02.190 14:46:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114764 00:28:02.191 14:46:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113924 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:02.191 14:46:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:08.820 14:46:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:08.820 14:46:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:28:08.820 14:46:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:28:08.820 14:46:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:08.820 Attaching 4 probes... 00:28:08.820 @path[10.0.0.2, 4420]: 16240 00:28:08.820 @path[10.0.0.2, 4420]: 16134 00:28:08.820 @path[10.0.0.2, 4420]: 16803 00:28:08.820 @path[10.0.0.2, 4420]: 16260 00:28:08.820 @path[10.0.0.2, 4420]: 16534 00:28:08.820 14:46:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:08.820 14:46:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:08.820 14:46:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:08.820 14:46:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:28:08.820 14:46:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:28:08.820 14:46:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:28:08.820 14:46:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114764 00:28:08.820 14:46:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:08.820 14:46:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:08.820 [2024-07-10 14:46:21.060978] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:08.820 14:46:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:09.077 14:46:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:28:15.634 14:46:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:28:15.634 14:46:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114956 00:28:15.634 14:46:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113924 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:15.634 14:46:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:22.300 14:46:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:22.300 14:46:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:22.300 14:46:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:28:22.300 14:46:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:22.300 Attaching 4 probes... 00:28:22.300 @path[10.0.0.2, 4421]: 16089 00:28:22.300 @path[10.0.0.2, 4421]: 16246 00:28:22.300 @path[10.0.0.2, 4421]: 16327 00:28:22.300 @path[10.0.0.2, 4421]: 16450 00:28:22.300 @path[10.0.0.2, 4421]: 16294 00:28:22.300 14:46:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:22.300 14:46:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:22.300 14:46:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:22.300 14:46:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:28:22.300 14:46:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:22.300 14:46:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:22.300 14:46:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114956 00:28:22.300 14:46:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:22.300 14:46:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 114023 00:28:22.300 14:46:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 114023 ']' 00:28:22.300 14:46:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 114023 00:28:22.300 14:46:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:28:22.300 14:46:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:22.300 14:46:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114023 00:28:22.300 killing process with pid 114023 00:28:22.300 14:46:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:28:22.300 14:46:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:28:22.300 14:46:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114023' 00:28:22.300 14:46:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 114023 00:28:22.300 14:46:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 114023 00:28:22.300 Connection closed with partial response: 00:28:22.300 00:28:22.300 00:28:22.300 14:46:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 114023 00:28:22.300 14:46:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:22.300 [2024-07-10 14:45:35.786573] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 
00:28:22.300 [2024-07-10 14:45:35.786686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114023 ] 00:28:22.300 [2024-07-10 14:45:35.904651] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:22.300 [2024-07-10 14:45:35.923207] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.300 [2024-07-10 14:45:35.959259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:22.300 Running I/O for 90 seconds... 00:28:22.300 [2024-07-10 14:45:46.078598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:33040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.300 [2024-07-10 14:45:46.078673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:22.300 [2024-07-10 14:45:46.078751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:33048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.300 [2024-07-10 14:45:46.078773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:22.300 [2024-07-10 14:45:46.078797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:33056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.300 [2024-07-10 14:45:46.078813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:22.300 [2024-07-10 14:45:46.078836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:33064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.300 [2024-07-10 14:45:46.078852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:22.300 [2024-07-10 14:45:46.078873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:33072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.300 [2024-07-10 14:45:46.078888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:22.300 [2024-07-10 14:45:46.078910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.300 [2024-07-10 14:45:46.078925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:22.300 [2024-07-10 14:45:46.078947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:33088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.300 [2024-07-10 14:45:46.078962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.078983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.301 [2024-07-10 14:45:46.078999] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.079021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:33104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.301 [2024-07-10 14:45:46.079037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.079058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:33112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.301 [2024-07-10 14:45:46.079073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.079124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:33120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.301 [2024-07-10 14:45:46.079141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.079610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.301 [2024-07-10 14:45:46.079638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.079665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.301 [2024-07-10 14:45:46.079683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.079705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:33144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.301 [2024-07-10 14:45:46.079720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.079742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:33152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.301 [2024-07-10 14:45:46.079757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.079779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:33160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.301 [2024-07-10 14:45:46.079794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.079816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.301 [2024-07-10 14:45:46.079831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.079853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.301 [2024-07-10 14:45:46.079868] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.079891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:32464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.079906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.079929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:32472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.079945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.079967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.079982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.080005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.080020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.080041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:32496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.080067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.080091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:32504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.080106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.080128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.080143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.080164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:32520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.080180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.080202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:32528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.080217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.080238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:32536 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.080253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.080275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:32544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.080305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.080329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:32552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.080344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.080366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.080382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.080404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:32568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.080419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.080441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:32576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.080456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.080478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:32584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.080494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.080515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:32592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.080538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.080561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:32600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.080577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.080599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.080614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.080636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:126 nsid:1 lba:32616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.080651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.080673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:32624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.080688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.080710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:32632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.080725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.080746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:32640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.080761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.080783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.080798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.080820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.080835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.080867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:32664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.080884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.080907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:32672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.080922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.080943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.080958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.080980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:32688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.080995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.081024] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.301 [2024-07-10 14:45:46.081039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:22.301 [2024-07-10 14:45:46.081061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.302 [2024-07-10 14:45:46.081076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:22.302 [2024-07-10 14:45:46.081099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:32712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.302 [2024-07-10 14:45:46.081118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:22.302 [2024-07-10 14:45:46.081141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.302 [2024-07-10 14:45:46.081158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:22.302 [2024-07-10 14:45:46.081180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.302 [2024-07-10 14:45:46.081197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:22.302 [2024-07-10 14:45:46.081220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.302 [2024-07-10 14:45:46.081236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:22.302 [2024-07-10 14:45:46.081258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:32744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.302 [2024-07-10 14:45:46.081274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:22.302 [2024-07-10 14:45:46.081311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.302 [2024-07-10 14:45:46.081328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:22.302 [2024-07-10 14:45:46.081351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:32760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.302 [2024-07-10 14:45:46.081367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:22.302 [2024-07-10 14:45:46.081390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:32768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.302 [2024-07-10 14:45:46.081406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 
sqhd:002e p:0 m:0 dnr:0 00:28:22.302 [2024-07-10 14:45:46.081428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:32776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.302 [2024-07-10 14:45:46.081444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:22.302 [2024-07-10 14:45:46.081467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.302 [2024-07-10 14:45:46.081483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:22.302 [2024-07-10 14:45:46.081514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.302 [2024-07-10 14:45:46.081531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:22.302 [2024-07-10 14:45:46.081554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:32800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.302 [2024-07-10 14:45:46.081570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:22.302 [2024-07-10 14:45:46.081592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:32808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.302 [2024-07-10 14:45:46.081608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:22.302 [2024-07-10 14:45:46.081630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:32816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.302 [2024-07-10 14:45:46.081646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:22.302 [2024-07-10 14:45:46.081668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:32824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.302 [2024-07-10 14:45:46.081684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:22.302 [2024-07-10 14:45:46.081707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.302 [2024-07-10 14:45:46.081729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:22.302 [2024-07-10 14:45:46.081753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.302 [2024-07-10 14:45:46.081769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:22.302 [2024-07-10 14:45:46.082473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.302 [2024-07-10 14:45:46.082500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:28:22.302 [2024-07-10 14:45:46.082525 - 14:45:46.085229] nvme_qpair.c: repeated *NOTICE* output from nvme_io_qpair_print_command/spdk_nvme_print_completion: every READ (lba:32856-33032, len:8) and WRITE (lba:33184-33360, len:8) submitted on sqid:1 nsid:1 completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
00:28:22.303 [2024-07-10 14:45:52.601631 - 14:45:52.611103] nvme_qpair.c: repeated *NOTICE* output from nvme_io_qpair_print_command/spdk_nvme_print_completion: every READ (lba:66320-66408, len:8) and WRITE (lba:66416-67096, len:8, several LBAs resubmitted) on sqid:1 nsid:1 completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
00:28:22.307 [2024-07-10 14:45:52.611124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.307 [2024-07-10 14:45:52.611140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:22.307 [2024-07-10 14:45:52.611162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.307 [2024-07-10 14:45:52.611177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:22.307 [2024-07-10 14:45:52.611199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.307 [2024-07-10 14:45:52.611215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:22.307 [2024-07-10 14:45:52.611236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.307 [2024-07-10 14:45:52.611252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:22.307 [2024-07-10 14:45:52.611297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.307 [2024-07-10 14:45:52.611316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:22.307 [2024-07-10 14:45:52.611339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.307 [2024-07-10 14:45:52.611354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:22.307 [2024-07-10 14:45:52.611376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.307 [2024-07-10 14:45:52.611391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:22.307 [2024-07-10 14:45:52.611413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.307 [2024-07-10 14:45:52.611428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:22.307 [2024-07-10 14:45:52.611450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.307 [2024-07-10 14:45:52.611465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:22.307 [2024-07-10 14:45:52.611486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.307 [2024-07-10 14:45:52.611502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:22.307 [2024-07-10 14:45:52.611524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 
lba:66904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.307 [2024-07-10 14:45:52.611539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:22.307 [2024-07-10 14:45:52.611560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.307 [2024-07-10 14:45:52.611576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.307 [2024-07-10 14:45:52.611598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.307 [2024-07-10 14:45:52.611613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.307 [2024-07-10 14:45:52.612500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.307 [2024-07-10 14:45:52.612529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.307 [2024-07-10 14:45:52.612557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:67104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.307 [2024-07-10 14:45:52.612575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.307 [2024-07-10 14:45:52.612598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:67112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.307 [2024-07-10 14:45:52.612613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:22.307 [2024-07-10 14:45:52.612635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:67120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.307 [2024-07-10 14:45:52.612669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:22.307 [2024-07-10 14:45:52.612693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.307 [2024-07-10 14:45:52.612709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:22.307 [2024-07-10 14:45:52.612732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:67136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.307 [2024-07-10 14:45:52.612747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:22.307 [2024-07-10 14:45:52.612772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:67144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.307 [2024-07-10 14:45:52.612789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:22.307 [2024-07-10 14:45:52.612811] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:67152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.307 [2024-07-10 14:45:52.612826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:22.307 [2024-07-10 14:45:52.612848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:67160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.307 [2024-07-10 14:45:52.612887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:22.307 [2024-07-10 14:45:52.612913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:67168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.307 [2024-07-10 14:45:52.612929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:22.307 [2024-07-10 14:45:52.612951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:67176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.307 [2024-07-10 14:45:52.612966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:22.307 [2024-07-10 14:45:52.612988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:67184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.307 [2024-07-10 14:45:52.613004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:22.307 [2024-07-10 14:45:52.613025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:67192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.307 [2024-07-10 14:45:52.613041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:22.307 [2024-07-10 14:45:52.613063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:67200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.307 [2024-07-10 14:45:52.613078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:22.307 [2024-07-10 14:45:52.613100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:67208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.307 [2024-07-10 14:45:52.613115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:22.307 [2024-07-10 14:45:52.613137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:67216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.307 [2024-07-10 14:45:52.613164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:22.307 [2024-07-10 14:45:52.613187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:67224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.308 [2024-07-10 14:45:52.613207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:28:22.308 [2024-07-10 14:45:52.613230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:67232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.308 [2024-07-10 14:45:52.613245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.613267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:67240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.308 [2024-07-10 14:45:52.613296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.613322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:67248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.308 [2024-07-10 14:45:52.613338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.613360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:67256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.308 [2024-07-10 14:45:52.613376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.613398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:67264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.308 [2024-07-10 14:45:52.613413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.613437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:67272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.308 [2024-07-10 14:45:52.613453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.613475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:67280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.308 [2024-07-10 14:45:52.613490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.613512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:67288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.308 [2024-07-10 14:45:52.613527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.613549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:67296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.308 [2024-07-10 14:45:52.613564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.613585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:67304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.308 [2024-07-10 14:45:52.613601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.613622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:67312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.308 [2024-07-10 14:45:52.613637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.613667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:67320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.308 [2024-07-10 14:45:52.613683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.613705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:67328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.308 [2024-07-10 14:45:52.613720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.613742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:67336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.308 [2024-07-10 14:45:52.613758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.613779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.308 [2024-07-10 14:45:52.613794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.613816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.308 [2024-07-10 14:45:52.613833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.613856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.308 [2024-07-10 14:45:52.613871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.613892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.308 [2024-07-10 14:45:52.613907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.613929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.308 [2024-07-10 14:45:52.613945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.613967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.308 [2024-07-10 14:45:52.613982] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.614004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.308 [2024-07-10 14:45:52.614019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.614043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.308 [2024-07-10 14:45:52.614059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.614081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:66344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.308 [2024-07-10 14:45:52.614096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.614123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:66352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.308 [2024-07-10 14:45:52.614139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.614162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.308 [2024-07-10 14:45:52.614177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.614199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:66368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.308 [2024-07-10 14:45:52.614214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.614236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.308 [2024-07-10 14:45:52.614251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.614273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:66384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.308 [2024-07-10 14:45:52.614304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.614329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.308 [2024-07-10 14:45:52.614345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.614367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:66400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:22.308 [2024-07-10 14:45:52.614382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.614405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:66408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.308 [2024-07-10 14:45:52.614421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.614443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:66320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.308 [2024-07-10 14:45:52.614461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.614484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.308 [2024-07-10 14:45:52.614499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.614521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.308 [2024-07-10 14:45:52.614536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:22.308 [2024-07-10 14:45:52.614558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.308 [2024-07-10 14:45:52.614573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.614595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.614617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.614640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.614656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.614680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.614696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.614718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.614733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.631242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 
lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.631347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.631393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.631421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.631455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.631481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.631514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.631540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.631574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.631598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.631631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.631657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.631692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.631717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.632981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.633033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.633089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.633167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.633216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.633249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.633309] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.633345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.633386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:67000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.633418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.633457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:67008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.633489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.633529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:67016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.633560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.633600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:67024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.633630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.633670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:67032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.633701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.633741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:67040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.633772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.633813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:67048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.633845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.633884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:67056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.633911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.633950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:67064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.633978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 
00:28:22.309 [2024-07-10 14:45:52.634016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:67072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.634047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.634116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:67080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.634147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.634186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:67088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.634216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.634258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:67096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.634307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.634349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.634378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.634415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.634445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.634484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.634513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.634551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.634582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.634620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.634650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.634688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.634717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:38 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.634755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.634783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.634823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.634853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.634890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.634920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.634986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.635020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.635059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.635090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.635126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.635155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.635194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.635224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:22.309 [2024-07-10 14:45:52.635261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.309 [2024-07-10 14:45:52.635332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.635381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.635415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.635455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.635485] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.635523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.635552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.635591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.635621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.635657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.635688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.635728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.635758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.635797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.635826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.635864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.635923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.635963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.635993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.636032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:66728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.636063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.636102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.636132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.636170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:22.310 [2024-07-10 14:45:52.636200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.636239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.636269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.636334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:66760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.636366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.636407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.636438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.636477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.636504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.636540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.636569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.636609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.636638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.636678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.636708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.636746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:66808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.636803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.636843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.636889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.636928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 
lba:66824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.636958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.636997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:66832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.637028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.637065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.637097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.637136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.637165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.637214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.637242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.637299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.637333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.637372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.637400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.637437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.637464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.637503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.637532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.637569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.637598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.637637] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.637668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.637738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.637771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.638907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.638956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.639007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.639039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.639079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:67104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.639109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.639149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:67112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.639179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.639216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:67120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.639246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.639304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:67128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.639338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.639376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.639408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:22.310 [2024-07-10 14:45:52.639446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.639474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 
00:28:22.310 [2024-07-10 14:45:52.639511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:67152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.310 [2024-07-10 14:45:52.639542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.639580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:67160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 14:45:52.639609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.639646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:67168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 14:45:52.639673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.639741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:67176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 14:45:52.639772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.639809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:67184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 14:45:52.639837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.639873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:67192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 14:45:52.639902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.639941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:67200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 14:45:52.639970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.640008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:67208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 14:45:52.640039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.640077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:67216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 14:45:52.640107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.640147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 14:45:52.640177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.640214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:67232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 14:45:52.640244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.640302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:67240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 14:45:52.640336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.640376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:67248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 14:45:52.640405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.640444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:67256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 14:45:52.640475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.640515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:67264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 14:45:52.640544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.640581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:67272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 14:45:52.640640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.640682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:67280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 14:45:52.640710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.640747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:67288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 14:45:52.640776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.640814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:67296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 14:45:52.640841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.640896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:67304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 14:45:52.640925] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.640962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:67312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 14:45:52.640989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.641027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:67320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 14:45:52.641056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.641094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:67328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 14:45:52.641122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.641159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:67336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 14:45:52.641187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.641223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 14:45:52.641250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.641305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 14:45:52.641339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.641379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:66952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 14:45:52.641410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.641447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 14:45:52.641505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.641548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 14:45:52.641577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.641613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 
14:45:52.641644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.641682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.311 [2024-07-10 14:45:52.641712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.641749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.311 [2024-07-10 14:45:52.641777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.641815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:66344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.311 [2024-07-10 14:45:52.641845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.641881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.311 [2024-07-10 14:45:52.641909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.641946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.311 [2024-07-10 14:45:52.641975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.642015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:66368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.311 [2024-07-10 14:45:52.642045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.642082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.311 [2024-07-10 14:45:52.642110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.642146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:66384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.311 [2024-07-10 14:45:52.642175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.642213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:66392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.311 [2024-07-10 14:45:52.642242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.642297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:66400 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.311 [2024-07-10 14:45:52.642329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.642394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:66408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.311 [2024-07-10 14:45:52.642426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.642461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:66320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.311 [2024-07-10 14:45:52.642488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:22.311 [2024-07-10 14:45:52.642524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.311 [2024-07-10 14:45:52.642552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.642591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.642620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.642658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.642687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.642723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.642752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.642788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.642816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.642853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.642881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.642918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.642946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.642984] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.643013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.643051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.643080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.643116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.643145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.643212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.643244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.643297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.643330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.643370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.643400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.644499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.644547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.644598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.644630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.644672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.644702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.644738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.644768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.644806] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.644835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.644888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:67000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.644921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.644960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:67008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.644990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.645030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.645059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.645097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:67024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.645126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.645163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:67032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.645226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.645267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:67040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.645318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.645358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.645386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.645422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:67056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.645451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.645491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:67064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.645520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 
dnr:0 00:28:22.312 [2024-07-10 14:45:52.645560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:67072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.645590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.645627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:67080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.645655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.645692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:67088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.645722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.645760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:67096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.645787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.645822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.645850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.645884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.645912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.645949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.645978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.646015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.646061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.646101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.646132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.646169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.646199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.646237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.646265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.646325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.646357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.646395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.646424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:22.312 [2024-07-10 14:45:52.646461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.312 [2024-07-10 14:45:52.646489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.646526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.646554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.646592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.646622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.646660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.646688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.646726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.646756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.646794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.646825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.646863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.646892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.646949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.646979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.647017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.647047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.647083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.647111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.647146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.647175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.647213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.647243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.647299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:66712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.647331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.647370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.647400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.647436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.647464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.647501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.647529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.647568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:22.313 [2024-07-10 14:45:52.647596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.647634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.647662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.647699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.647726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.647778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.647809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.647846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.647874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.647912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:66784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.647942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.647980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.648009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.648048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.648078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.648114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.648143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.648180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.648209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.648248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 
lba:66824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.648277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.648343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.648372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.648410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.648440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.648477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.648505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.648542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.648572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.648608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.648654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.648694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.648724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.648761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.648789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.648824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.648867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.648910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.648941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.648979] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.313 [2024-07-10 14:45:52.649009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:22.313 [2024-07-10 14:45:52.650116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.650164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.650214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.650246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.650308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.650342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.650381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:67104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.650410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.650450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.650483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.650522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:67120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.650551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.650588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:67128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.650644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.650687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:67136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.650716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.650752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:67144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.650781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 
00:28:22.314 [2024-07-10 14:45:52.650819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:67152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.650849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.650887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:67160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.650916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.650954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:67168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.650984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.651023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:67176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.651054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.651093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:67184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.651121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.651156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:67192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.651185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.651222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:67200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.651252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.651310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:67208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.651343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.651381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:67216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.651410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.651447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:67224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.651481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:50 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.651548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:67232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.651579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.651617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:67240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.651647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.651684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:67248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.651712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.651751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:67256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.651780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.651823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:67264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.651853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.651889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:67272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.651918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.651953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:67280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.651983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.652021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:67288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.652049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.652085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:67296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.652113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.652150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:67304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.652181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.652219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:67312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.652248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.652308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:67320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.652341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.652405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:67328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.652440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.652478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:67336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.652506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.652543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.652571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.652608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.652640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.652679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.652707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.652743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.652772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.652809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.314 [2024-07-10 14:45:52.652836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:22.314 [2024-07-10 14:45:52.652888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:22.314-00:28:22.319 [2024-07-10 14:45:52 / 14:45:59] nvme_qpair.c: [repeated *NOTICE* pairs from 243:nvme_io_qpair_print_command and 474:spdk_nvme_print_completion: WRITE and READ commands (sqid:1, nsid:1, len:8, LBAs in the 66xxx-67xxx and 83xxx-84xxx ranges) each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0]
00:28:22.319 [2024-07-10 14:45:59.865059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:22.319 [2024-07-10 14:45:59.865081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.319 [2024-07-10 14:45:59.865096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:22.319 [2024-07-10 14:45:59.865126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.319 [2024-07-10 14:45:59.865142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:22.319 [2024-07-10 14:45:59.865164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.319 [2024-07-10 14:45:59.865179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:22.319 [2024-07-10 14:45:59.865201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.319 [2024-07-10 14:45:59.865217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:22.319 [2024-07-10 14:45:59.865239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.319 [2024-07-10 14:45:59.865254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:22.319 [2024-07-10 14:45:59.865276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.319 [2024-07-10 14:45:59.865309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:22.319 [2024-07-10 14:45:59.865332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.319 [2024-07-10 14:45:59.865349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:22.319 [2024-07-10 14:45:59.865371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.319 [2024-07-10 14:45:59.865386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:22.319 [2024-07-10 14:45:59.865408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.319 [2024-07-10 14:45:59.865423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:22.319 [2024-07-10 14:45:59.865452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 
lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.319 [2024-07-10 14:45:59.865469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:22.319 [2024-07-10 14:45:59.865490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.319 [2024-07-10 14:45:59.865505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:22.319 [2024-07-10 14:45:59.865527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.319 [2024-07-10 14:45:59.865543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:22.319 [2024-07-10 14:45:59.865565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.319 [2024-07-10 14:45:59.865580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:22.319 [2024-07-10 14:45:59.865602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.319 [2024-07-10 14:45:59.865625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:22.319 [2024-07-10 14:45:59.865648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.319 [2024-07-10 14:45:59.865664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:22.319 [2024-07-10 14:45:59.865686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.319 [2024-07-10 14:45:59.865701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.319 [2024-07-10 14:45:59.865723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.319 [2024-07-10 14:45:59.865738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.319 [2024-07-10 14:45:59.865760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.319 [2024-07-10 14:45:59.865775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:22.319 [2024-07-10 14:45:59.865797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.319 [2024-07-10 14:45:59.865813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:22.319 [2024-07-10 14:45:59.865834] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.319 [2024-07-10 14:45:59.865849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.865871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.865886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.865908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.865923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.865945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.865960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.865982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.865997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.866020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.866035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.866059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.866083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.866107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.866122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.866144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.866159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.866182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.866197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:28:22.320 [2024-07-10 14:45:59.866219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.866234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.866256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.866272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.866313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.866330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.866352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.866367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.866389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.866404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.866426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.866441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.866463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.866478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.866500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.866515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.866537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.866555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.866608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.866637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.866676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.866705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.866742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.866770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.866815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.866850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.866886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:83456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.866914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.866949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.866972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.867006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.867036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.867072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:83480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.867099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.867137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.867166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.867200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.867225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.867265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.867313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.867354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.867381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.867432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.867464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.867503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.867531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.867568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:83536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.867594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.867632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.867663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.867699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.867727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.867764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.867794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.867833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.867861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.867904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.867930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.867970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:22.320 [2024-07-10 14:45:59.868001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.868038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.868067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.868106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.868136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:22.320 [2024-07-10 14:45:59.869517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.320 [2024-07-10 14:45:59.869577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.869633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:83616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.869691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.869735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.869762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.869804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:83632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.869832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.869869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.869900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.869943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.869975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.870015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.870046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.870083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 
lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.870114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.870151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.870179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.870214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.870253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.870311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.870341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.870377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.870404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.870442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.870470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.870507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.870552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.870592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.870618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.870652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.870676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.870715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.870741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.870778] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.870810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.870849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.870874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.870898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.870914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.870936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.870951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.870974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.870989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.871012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:83784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.871027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.871050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.871065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.871087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.871101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.871123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.871139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.871179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.871195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 
00:28:22.321 [2024-07-10 14:45:59.871217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.871233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.871255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.871271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.871310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.871328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.871349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.871365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.871397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.871412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.871435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.871450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.871472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.321 [2024-07-10 14:45:59.871490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.871514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:83240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.321 [2024-07-10 14:45:59.871529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.871551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.321 [2024-07-10 14:45:59.871567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:22.321 [2024-07-10 14:45:59.871589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.321 [2024-07-10 14:45:59.871604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:62 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.871626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.322 [2024-07-10 14:45:59.871641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.871671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.322 [2024-07-10 14:45:59.871687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.871710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.322 [2024-07-10 14:45:59.871726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.871748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.322 [2024-07-10 14:45:59.871763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.871786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.322 [2024-07-10 14:45:59.871801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.871824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.871841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.871878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.871906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.871945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.871974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.872012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.872043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.872084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.872115] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.872154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.872184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.872222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.872251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.872296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.872319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.872343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.872371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.872395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.872410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.872433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.872451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.872474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.872489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.872512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.872527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.872549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.322 [2024-07-10 14:45:59.872564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.872587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:83312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:22.322 [2024-07-10 14:45:59.872602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.872624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:83320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.322 [2024-07-10 14:45:59.872640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.872662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:83328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.322 [2024-07-10 14:45:59.872677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.872699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:83336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.322 [2024-07-10 14:45:59.872714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.872736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:83344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.322 [2024-07-10 14:45:59.872751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.872773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.322 [2024-07-10 14:45:59.872788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.872810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.872832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.892828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.892917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.892957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.892981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.893013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.893037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.893072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 
nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.893094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.894461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.894516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.894560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.894586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.894620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.894642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.894674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.894696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.894728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.894750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.894782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.894803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.894835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.894857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.894889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.894911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.894970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.894994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.895027] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.895049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.895080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.895102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:22.322 [2024-07-10 14:45:59.895134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.322 [2024-07-10 14:45:59.895156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.895188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.895209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.895241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.895263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.895315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.895341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.895373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.895396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.895428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.895450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.895482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.895504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.895535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.895560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 
00:28:22.323 [2024-07-10 14:45:59.895593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.895615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.895657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.895680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.895712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.895734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.895766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.895788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.895819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.895841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.895874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.895895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.895948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.895971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.896003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.896026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.896058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.896079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.896111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.896133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.896164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.896186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.896218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.896240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.896272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:83368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.896310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.896345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.896379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.896412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:83384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.896434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.896465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.896487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.896519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.896541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.896573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.896594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.896627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.896665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.896702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.896725] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.896757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.896778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.896810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.896832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.896881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.896908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.896940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:83456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.896963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.896995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.897017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.897049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.897082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.897116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.897139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.897171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.897202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.897234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.897257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.897304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:22.323 [2024-07-10 14:45:59.897330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.897363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:83512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.897384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.897417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:83520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.897439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.897471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.897493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.897525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.323 [2024-07-10 14:45:59.897547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:22.323 [2024-07-10 14:45:59.897579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.897601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.897633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.897655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.897686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.897713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.897757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.897779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.897822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.897845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.897877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 
lba:83584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.897899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.897932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:83592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.897954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.899041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.899080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.899119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.899143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.899177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.899201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.899233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.899255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.899304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.899329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.899362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.899384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.899421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.899458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.899513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.899554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.899611] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.899651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.899724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.899766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.899819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.899859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.899912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.899968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.900021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.900060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.900110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.900149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.900200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.900238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.900308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.900346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.900396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.900434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.900482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.900519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001f p:0 m:0 dnr:0 
00:28:22.324 [2024-07-10 14:45:59.900572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.900613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.900666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.900706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.900760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.900800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.900871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.900940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.900996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.901042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.901077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.901102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.901134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.901158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.901195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.901221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.901261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.901290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.901359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.901394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:55 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.901430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.901454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.901488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.901517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.901553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.901584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.901621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.901651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.901690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.901721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.901756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.901796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.901833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.324 [2024-07-10 14:45:59.901861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:22.324 [2024-07-10 14:45:59.901900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:83240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.324 [2024-07-10 14:45:59.901929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.901968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.325 [2024-07-10 14:45:59.901998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.902037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:83256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.325 [2024-07-10 14:45:59.902066] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.902106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.325 [2024-07-10 14:45:59.902132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.902166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.325 [2024-07-10 14:45:59.902191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.902228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.325 [2024-07-10 14:45:59.902258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.902310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.325 [2024-07-10 14:45:59.902330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.902352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:83296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.325 [2024-07-10 14:45:59.902378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.902408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.325 [2024-07-10 14:45:59.902426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.902448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.325 [2024-07-10 14:45:59.902463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.902485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.325 [2024-07-10 14:45:59.902500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.902535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.325 [2024-07-10 14:45:59.902551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.902573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:22.325 [2024-07-10 14:45:59.902588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.902609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.325 [2024-07-10 14:45:59.902624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.902646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.325 [2024-07-10 14:45:59.902661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.902683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.325 [2024-07-10 14:45:59.902709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.902730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.325 [2024-07-10 14:45:59.902746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.902767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.325 [2024-07-10 14:45:59.902782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.902804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.325 [2024-07-10 14:45:59.902819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.902841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.325 [2024-07-10 14:45:59.902856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.902877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.325 [2024-07-10 14:45:59.902892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.902914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.325 [2024-07-10 14:45:59.902928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.902950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:83312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.325 [2024-07-10 14:45:59.902965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.902994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:83320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.325 [2024-07-10 14:45:59.903010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.903032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.325 [2024-07-10 14:45:59.903047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.903069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.325 [2024-07-10 14:45:59.903084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.903106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:83344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.325 [2024-07-10 14:45:59.903121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.903143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:83352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.325 [2024-07-10 14:45:59.903158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.903179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.325 [2024-07-10 14:45:59.903195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.903217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.325 [2024-07-10 14:45:59.903232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.903255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.325 [2024-07-10 14:45:59.903298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.903332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.325 [2024-07-10 14:45:59.903359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.904370] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.325 [2024-07-10 14:45:59.904414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.904459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.325 [2024-07-10 14:45:59.904478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.904500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.325 [2024-07-10 14:45:59.904516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.904537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.325 [2024-07-10 14:45:59.904566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:22.325 [2024-07-10 14:45:59.904589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.904605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.904626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.904641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.904663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.904678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.904701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.904717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.904738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.904753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.904775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.904791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 
00:28:22.326 [2024-07-10 14:45:59.904812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.904827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.904849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.904883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.904906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.904922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.904944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.904959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.904981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.905003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.905029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.905056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.905079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.905095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.905117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.905132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.905154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.905170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.905191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.905207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:46 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.905228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.905244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.905265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.905294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.905320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.905336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.905373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.905396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.905420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.905436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.905457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.905473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.905495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.905510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.905532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.905548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.905578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.905595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.905616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.905632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.905653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.905669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.905693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.905709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.905731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:83368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.905746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.905768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.905783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.905805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:83384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.905820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.905841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.905857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.905878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.905893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.905915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:83408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.905930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.905952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.905973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.906013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:22.326 [2024-07-10 14:45:59.906040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.906070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.906086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.906109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.906124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.906145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.906161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.906183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.906198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.906219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.326 [2024-07-10 14:45:59.906234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:22.326 [2024-07-10 14:45:59.906256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.327 [2024-07-10 14:45:59.906271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:22.327 [2024-07-10 14:45:59.906318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.327 [2024-07-10 14:45:59.906349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.327 [2024-07-10 14:45:59.906396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.327 [2024-07-10 14:45:59.906427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.327 [2024-07-10 14:45:59.906464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.327 [2024-07-10 14:45:59.906489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.327 [2024-07-10 14:45:59.906521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 
lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.327 [repeated nvme_qpair.c NOTICE entries at [2024-07-10 14:45:59] on qid:1: WRITE commands (nsid:1, lba:83504-84008, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and interleaved READ commands (nsid:1, lba:83240-83352, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0]
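The status pair in these completion notices is printed as (status code type/status code): (03/02) is the NVMe path-related status ASYMMETRIC ACCESS INACCESSIBLE, and (00/08) below is the generic status ABORTED - SQ DELETION. Purely as an illustrative aid for post-processing a log like this one (not part of the test output and independent of the SPDK API; the constant and function names are hypothetical), a minimal C sketch that maps those pairs back to the names the driver prints alongside them:

#include <stdio.h>

/* Hypothetical helper, not SPDK code: status code types per the NVMe spec. */
#define SCT_GENERIC 0x0   /* generic command status */
#define SCT_PATH    0x3   /* path-related status    */

/* Status codes that appear in this section of the log. */
#define SC_ABORTED_SQ_DELETION 0x08   /* printed as (00/08) */
#define SC_ANA_INACCESSIBLE    0x02   /* printed as (03/02) */

/* Map an (sct, sc) pair, as shown in the "(xx/yy)" field of the notices,
 * to the human-readable name logged next to it. */
static const char *status_name(unsigned sct, unsigned sc)
{
    if (sct == SCT_PATH && sc == SC_ANA_INACCESSIBLE)
        return "ASYMMETRIC ACCESS INACCESSIBLE";
    if (sct == SCT_GENERIC && sc == SC_ABORTED_SQ_DELETION)
        return "ABORTED - SQ DELETION";
    return "OTHER";
}

int main(void)
{
    /* The two pairs that dominate this part of the log. */
    printf("(03/02) -> %s\n", status_name(0x3, 0x02));
    printf("(00/08) -> %s\n", status_name(0x0, 0x08));
    return 0;
}

The mapping is taken directly from the names the driver prints beside each numeric pair in the entries above and below.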
00:28:22.329 [repeated nvme_qpair.c NOTICE entries at [2024-07-10 14:46:13] on qid:1: READ commands (nsid:1, lba:112176-112472, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands (nsid:1, lba:112480-112856, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) each completed with ABORTED - SQ DELETION (00/08) cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:28:22.331 [repeated nvme_qpair.c entries: 579:nvme_qpair_abort_queued_reqs *ERROR*: aborting queued i/o and 558:nvme_qpair_manual_complete_request *NOTICE*: queued WRITE requests (cid:0, nsid:1, lba:112864-113024, len:8, PRP1 0x0 PRP2 0x0) completed manually with ABORTED - SQ DELETION (00/08)]
00:28:22.332 [2024-07-10 14:46:13.441385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:22.332 [2024-07-10 14:46:13.441395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*:
Command completed manually: 00:28:22.332 [2024-07-10 14:46:13.441405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113032 len:8 PRP1 0x0 PRP2 0x0 00:28:22.332 [2024-07-10 14:46:13.441418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.332 [2024-07-10 14:46:13.441432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:22.332 [2024-07-10 14:46:13.441442] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:22.332 [2024-07-10 14:46:13.441452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113040 len:8 PRP1 0x0 PRP2 0x0 00:28:22.332 [2024-07-10 14:46:13.441465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.332 [2024-07-10 14:46:13.441478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:22.332 [2024-07-10 14:46:13.441494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:22.332 [2024-07-10 14:46:13.441505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113048 len:8 PRP1 0x0 PRP2 0x0 00:28:22.332 [2024-07-10 14:46:13.441518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.332 [2024-07-10 14:46:13.441532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:22.332 [2024-07-10 14:46:13.441544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:22.332 [2024-07-10 14:46:13.441555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113056 len:8 PRP1 0x0 PRP2 0x0 00:28:22.332 [2024-07-10 14:46:13.441567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.332 [2024-07-10 14:46:13.441581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:22.332 [2024-07-10 14:46:13.441590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:22.332 [2024-07-10 14:46:13.441601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113064 len:8 PRP1 0x0 PRP2 0x0 00:28:22.332 [2024-07-10 14:46:13.441613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.332 [2024-07-10 14:46:13.441627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:22.332 [2024-07-10 14:46:13.441636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:22.332 [2024-07-10 14:46:13.441646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113072 len:8 PRP1 0x0 PRP2 0x0 00:28:22.332 [2024-07-10 14:46:13.441659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.332 [2024-07-10 14:46:13.441672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:22.332 [2024-07-10 14:46:13.441682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:22.332 
[2024-07-10 14:46:13.441692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113080 len:8 PRP1 0x0 PRP2 0x0 00:28:22.332 [2024-07-10 14:46:13.441705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.332 [2024-07-10 14:46:13.441718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:22.332 [2024-07-10 14:46:13.441727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:22.332 [2024-07-10 14:46:13.441737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113088 len:8 PRP1 0x0 PRP2 0x0 00:28:22.332 [2024-07-10 14:46:13.441750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.332 [2024-07-10 14:46:13.441763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:22.332 [2024-07-10 14:46:13.441773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:22.332 [2024-07-10 14:46:13.441787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113096 len:8 PRP1 0x0 PRP2 0x0 00:28:22.332 [2024-07-10 14:46:13.441801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.332 [2024-07-10 14:46:13.441814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:22.332 [2024-07-10 14:46:13.441824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:22.332 [2024-07-10 14:46:13.441834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113104 len:8 PRP1 0x0 PRP2 0x0 00:28:22.332 [2024-07-10 14:46:13.441847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.332 [2024-07-10 14:46:13.441866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:22.332 [2024-07-10 14:46:13.441876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:22.332 [2024-07-10 14:46:13.441887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113112 len:8 PRP1 0x0 PRP2 0x0 00:28:22.332 [2024-07-10 14:46:13.441900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.332 [2024-07-10 14:46:13.441913] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:22.332 [2024-07-10 14:46:13.441925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:22.332 [2024-07-10 14:46:13.441936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113120 len:8 PRP1 0x0 PRP2 0x0 00:28:22.332 [2024-07-10 14:46:13.441949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.332 [2024-07-10 14:46:13.441962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:22.332 [2024-07-10 14:46:13.441972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:22.332 [2024-07-10 14:46:13.441982] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113128 len:8 PRP1 0x0 PRP2 0x0 00:28:22.332 [2024-07-10 14:46:13.441996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.332 [2024-07-10 14:46:13.442009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:22.332 [2024-07-10 14:46:13.442019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:22.333 [2024-07-10 14:46:13.442029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113136 len:8 PRP1 0x0 PRP2 0x0 00:28:22.333 [2024-07-10 14:46:13.442042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.333 [2024-07-10 14:46:13.442056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:22.333 [2024-07-10 14:46:13.442065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:22.333 [2024-07-10 14:46:13.442075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113144 len:8 PRP1 0x0 PRP2 0x0 00:28:22.333 [2024-07-10 14:46:13.442088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.333 [2024-07-10 14:46:13.442102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:22.333 [2024-07-10 14:46:13.442112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:22.333 [2024-07-10 14:46:13.442122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113152 len:8 PRP1 0x0 PRP2 0x0 00:28:22.333 [2024-07-10 14:46:13.442135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.333 [2024-07-10 14:46:13.442148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:22.333 [2024-07-10 14:46:13.442158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:22.333 [2024-07-10 14:46:13.442170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113160 len:8 PRP1 0x0 PRP2 0x0 00:28:22.333 [2024-07-10 14:46:13.442184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.333 [2024-07-10 14:46:13.442198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:22.333 [2024-07-10 14:46:13.442207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:22.333 [2024-07-10 14:46:13.442218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113168 len:8 PRP1 0x0 PRP2 0x0 00:28:22.333 [2024-07-10 14:46:13.442236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.333 [2024-07-10 14:46:13.442250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:22.333 [2024-07-10 14:46:13.442260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:22.333 [2024-07-10 14:46:13.442270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:113176 len:8 PRP1 0x0 PRP2 0x0 00:28:22.333 [2024-07-10 14:46:13.442294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.333 [2024-07-10 14:46:13.442310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:22.333 [2024-07-10 14:46:13.442322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:22.333 [2024-07-10 14:46:13.442332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113184 len:8 PRP1 0x0 PRP2 0x0 00:28:22.333 [2024-07-10 14:46:13.442346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.333 [2024-07-10 14:46:13.442359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:22.333 [2024-07-10 14:46:13.442369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:22.333 [2024-07-10 14:46:13.442379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113192 len:8 PRP1 0x0 PRP2 0x0 00:28:22.333 [2024-07-10 14:46:13.442393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.333 [2024-07-10 14:46:13.442443] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbc84c0 was disconnected and freed. reset controller. 00:28:22.333 [2024-07-10 14:46:13.443952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.333 [2024-07-10 14:46:13.444057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba6750 (9): Bad file descriptor 00:28:22.333 [2024-07-10 14:46:13.444201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.333 [2024-07-10 14:46:13.444234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba6750 with addr=10.0.0.2, port=4421 00:28:22.333 [2024-07-10 14:46:13.444251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba6750 is same with the state(5) to be set 00:28:22.333 [2024-07-10 14:46:13.444296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba6750 (9): Bad file descriptor 00:28:22.333 [2024-07-10 14:46:13.444325] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.333 [2024-07-10 14:46:13.444341] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.333 [2024-07-10 14:46:13.444357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.333 [2024-07-10 14:46:13.444382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.333 [2024-07-10 14:46:13.444398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.333 [2024-07-10 14:46:23.550177] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
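The block above shows the multipath failover path end to end: queued WRITEs are completed manually with ABORTED - SQ DELETION when qpair 0xbc84c0 is torn down and freed, the controller disconnects, the first reconnect to 10.0.0.2 port 4421 fails with errno 111 (connection refused), and a retry roughly ten seconds later resets the controller successfully. The trace itself does not show which RPC produced this particular submission-queue deletion; the lines below are only a hypothetical sketch of how the same abort/reconnect cycle could be provoked, using rpc.py calls, the subsystem name, the address, and the ports that appear elsewhere in this log.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Dropping the active listener deletes the submission queues, so queued I/O is
  # completed manually as ABORTED - SQ DELETION and the initiator begins
  # reconnect attempts (connect() fails with errno 111 while no listener exists).
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 10
  # Re-advertising the subsystem on the second port lets the next reconnect
  # attempt succeed, matching the "Resetting controller successful" notice above.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421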
00:28:22.333 Received shutdown signal, test time was about 55.752558 seconds 00:28:22.333 00:28:22.333 Latency(us) 00:28:22.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:22.333 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:22.333 Verification LBA range: start 0x0 length 0x4000 00:28:22.333 Nvme0n1 : 55.75 7037.40 27.49 0.00 0.00 18157.22 1057.51 7107438.78 00:28:22.333 =================================================================================================================== 00:28:22.333 Total : 7037.40 27.49 0.00 0.00 18157.22 1057.51 7107438.78 00:28:22.333 14:46:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:22.333 rmmod nvme_tcp 00:28:22.333 rmmod nvme_fabrics 00:28:22.333 rmmod nvme_keyring 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 113924 ']' 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 113924 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 113924 ']' 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 113924 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113924 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113924' 00:28:22.333 killing process with pid 113924 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 113924 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 113924 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:22.333 00:28:22.333 real 1m1.681s 00:28:22.333 user 2m55.407s 00:28:22.333 sys 0m13.516s 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:22.333 14:46:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:22.333 ************************************ 00:28:22.333 END TEST nvmf_host_multipath 00:28:22.333 ************************************ 00:28:22.333 14:46:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:22.333 14:46:34 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:22.333 14:46:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:22.333 14:46:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:22.333 14:46:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:22.333 ************************************ 00:28:22.333 START TEST nvmf_timeout 00:28:22.333 ************************************ 00:28:22.333 14:46:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:22.333 * Looking for test storage... 
00:28:22.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:22.333 14:46:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:22.333 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:28:22.333 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:22.333 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:22.333 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:22.333 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:22.333 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:22.333 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:22.333 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:22.333 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:22.333 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:22.333 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:22.333 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:28:22.333 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:28:22.333 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.334 
14:46:34 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:22.334 14:46:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.593 14:46:34 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:22.593 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:22.593 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:22.593 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:22.593 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:22.594 Cannot find device "nvmf_tgt_br" 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:22.594 Cannot find device "nvmf_tgt_br2" 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:22.594 Cannot find device "nvmf_tgt_br" 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:22.594 Cannot find device "nvmf_tgt_br2" 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:22.594 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:22.594 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:22.594 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:22.853 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:22.853 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:22.853 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:22.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:28:22.853 00:28:22.853 --- 10.0.0.2 ping statistics --- 00:28:22.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.853 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:28:22.853 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:22.853 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:28:22.853 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:28:22.853 00:28:22.853 --- 10.0.0.3 ping statistics --- 00:28:22.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.853 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:28:22.853 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:22.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:22.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:28:22.853 00:28:22.853 --- 10.0.0.1 ping statistics --- 00:28:22.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.853 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:28:22.853 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.853 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:28:22.853 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:22.853 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.853 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:22.853 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:22.853 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.854 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:22.854 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:22.854 14:46:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:28:22.854 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:22.854 14:46:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:22.854 14:46:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:22.854 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=115265 00:28:22.854 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:22.854 14:46:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 115265 00:28:22.854 14:46:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 115265 ']' 00:28:22.854 14:46:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.854 14:46:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:22.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.854 14:46:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.854 14:46:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:22.854 14:46:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:22.854 [2024-07-10 14:46:35.005532] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 
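Before the timeout test starts the target, nvmf_veth_init builds a private test network: the initiator keeps nvmf_init_if (10.0.0.1) in the root namespace, the target interfaces nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, the veth peer ends are joined through the nvmf_br bridge, TCP port 4420 is opened in iptables, connectivity is verified with single pings, and only then is nvmf_tgt launched inside the namespace. The following is a minimal standalone sketch of that topology, condensed from the commands traced above (interface names and addresses are the ones in the log); it assumes root privileges on a clean host.

  set -e
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3            # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host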
00:28:22.854 [2024-07-10 14:46:35.006345] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:22.854 [2024-07-10 14:46:35.131968] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:23.112 [2024-07-10 14:46:35.152199] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:23.112 [2024-07-10 14:46:35.191793] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:23.112 [2024-07-10 14:46:35.191856] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:23.112 [2024-07-10 14:46:35.191870] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:23.112 [2024-07-10 14:46:35.191881] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:23.112 [2024-07-10 14:46:35.191890] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:23.112 [2024-07-10 14:46:35.192072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.112 [2024-07-10 14:46:35.192084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.112 14:46:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:23.112 14:46:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:28:23.112 14:46:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:23.112 14:46:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:23.112 14:46:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:23.112 14:46:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:23.112 14:46:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:23.112 14:46:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:23.370 [2024-07-10 14:46:35.580386] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:23.370 14:46:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:23.628 Malloc0 00:28:23.628 14:46:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:24.195 14:46:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:24.195 14:46:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:24.453 [2024-07-10 14:46:36.627786] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:24.453 14:46:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:28:24.453 14:46:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=115343 00:28:24.453 14:46:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 115343 /var/tmp/bdevperf.sock 00:28:24.453 14:46:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 115343 ']' 00:28:24.453 14:46:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:24.453 14:46:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:24.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:24.453 14:46:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:24.453 14:46:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:24.453 14:46:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:24.453 [2024-07-10 14:46:36.693401] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:28:24.453 [2024-07-10 14:46:36.693485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115343 ] 00:28:24.709 [2024-07-10 14:46:36.812130] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:24.709 [2024-07-10 14:46:36.831181] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.709 [2024-07-10 14:46:36.872639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:24.709 14:46:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:24.709 14:46:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:28:24.709 14:46:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:24.967 14:46:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:28:25.547 NVMe0n1 00:28:25.547 14:46:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:25.547 14:46:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=115377 00:28:25.547 14:46:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:28:25.547 Running I/O for 10 seconds... 
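With the target listening on 10.0.0.2:4420, timeout.sh wires up the workload: Malloc0 (64 MiB, 512-byte blocks) is exported through subsystem nqn.2016-06.io.spdk:cnode1, bdevperf runs on core mask 0x4 with queue depth 128 and a 4096-byte verify workload for 10 seconds, and the controller is attached with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2 so the later subtests can exercise timeout and reconnect handling. A condensed sketch of that RPC sequence, taken directly from the invocations traced above (the bdevperf application itself is started separately, as shown at host/timeout.sh@31):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Target-side setup: TCP transport, a malloc backing bdev, and the subsystem.
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator-side setup over the bdevperf RPC socket, with the options and
  # timeouts exactly as traced (-r -1, 5 s ctrlr loss timeout, 2 s reconnect delay).
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # Kick off the queued bdevperf job (the 10-second verify run logged below).
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests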
00:28:26.480 14:46:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:26.741 [2024-07-10 14:46:38.898473] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898565] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898573] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898582] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898590] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898598] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898607] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898615] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898623] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898631] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898640] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898648] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898656] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898664] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898672] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898680] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898688] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898696] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898704] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898712] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898721] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898729] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898737] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898745] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898753] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898761] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898769] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898778] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898786] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898795] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898803] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898812] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898820] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898829] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898837] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898845] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898854] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898862] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898870] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898878] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898886] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the 
state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898905] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898915] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898923] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.898931] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24180c0 is same with the state(5) to be set 00:28:26.741 [2024-07-10 14:46:38.900491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.741 [2024-07-10 14:46:38.900534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.741 [2024-07-10 14:46:38.900557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:87648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.741 [2024-07-10 14:46:38.900568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.741 [2024-07-10 14:46:38.900581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:87656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.741 [2024-07-10 14:46:38.900590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.741 [2024-07-10 14:46:38.900602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.741 [2024-07-10 14:46:38.900611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.741 [2024-07-10 14:46:38.900623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:87672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.741 [2024-07-10 14:46:38.900632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.741 [2024-07-10 14:46:38.900644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:87680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.741 [2024-07-10 14:46:38.900653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.900664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:87688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.742 [2024-07-10 14:46:38.900673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.900684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:87696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.742 [2024-07-10 14:46:38.900693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.900704] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.742 [2024-07-10 14:46:38.900713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.900724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.742 [2024-07-10 14:46:38.900733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.900744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.742 [2024-07-10 14:46:38.900753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.900764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.742 [2024-07-10 14:46:38.900773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.900784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.742 [2024-07-10 14:46:38.900793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.900804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.742 [2024-07-10 14:46:38.900813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.900824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:88144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.742 [2024-07-10 14:46:38.900833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.900857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.742 [2024-07-10 14:46:38.900874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.900892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.742 [2024-07-10 14:46:38.900906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.900924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:87712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.742 [2024-07-10 14:46:38.900937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.900949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:19 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.742 [2024-07-10 14:46:38.900958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.900969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:87728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.742 [2024-07-10 14:46:38.900979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.900990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.742 [2024-07-10 14:46:38.900999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.901010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:87744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.742 [2024-07-10 14:46:38.901019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.901030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:87752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.742 [2024-07-10 14:46:38.901039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.901050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:87760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.742 [2024-07-10 14:46:38.901059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.901070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.742 [2024-07-10 14:46:38.901079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.901090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.742 [2024-07-10 14:46:38.901099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.901110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:88176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.742 [2024-07-10 14:46:38.901119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.901132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:88184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.742 [2024-07-10 14:46:38.901141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.901152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88192 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.742 [2024-07-10 14:46:38.901161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.901172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.742 [2024-07-10 14:46:38.901181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.901192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.742 [2024-07-10 14:46:38.901201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.901213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.742 [2024-07-10 14:46:38.901223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.901234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.742 [2024-07-10 14:46:38.901243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.901254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:88232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.742 [2024-07-10 14:46:38.901263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.901274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.742 [2024-07-10 14:46:38.901296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.901309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:88248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.742 [2024-07-10 14:46:38.901318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.901329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.742 [2024-07-10 14:46:38.901338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.901349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:88264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.742 [2024-07-10 14:46:38.901358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.742 [2024-07-10 14:46:38.901369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.742 
[2024-07-10 14:46:38.901378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.743 [2024-07-10 14:46:38.901398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:88288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.743 [2024-07-10 14:46:38.901419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.901439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.901459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.901479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:87800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.901499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:87808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.901520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:87816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.901540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:87824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.901560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.901581] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:87840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.901601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:87848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.901621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:87856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.901642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.901662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:87872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.901682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:87880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.901702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:87888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.901723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:87896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.901743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:87904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.901763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:87912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.901785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:87920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.901805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:87928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.901825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:87936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.901845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:87944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.901866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:87952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.901886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:87960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.901906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:87968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.901926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.901946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:87984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.901966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.901977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.901993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.902006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:88000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.902015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.902026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:88008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.902035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.902046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:88016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.743 [2024-07-10 14:46:38.902055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.743 [2024-07-10 14:46:38.902066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:88024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.744 [2024-07-10 14:46:38.902075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:88032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.744 [2024-07-10 14:46:38.902095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:88040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.744 [2024-07-10 14:46:38.902115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:88048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.744 [2024-07-10 14:46:38.902136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:88056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.744 [2024-07-10 14:46:38.902156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:88064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.744 [2024-07-10 14:46:38.902177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:88072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.744 [2024-07-10 14:46:38.902198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:26.744 [2024-07-10 14:46:38.902209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:88080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.744 [2024-07-10 14:46:38.902218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:88088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.744 [2024-07-10 14:46:38.902238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:88296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.744 [2024-07-10 14:46:38.902258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.744 [2024-07-10 14:46:38.902278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:88312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.744 [2024-07-10 14:46:38.902310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.744 [2024-07-10 14:46:38.902333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:88328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.744 [2024-07-10 14:46:38.902354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.744 [2024-07-10 14:46:38.902374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.744 [2024-07-10 14:46:38.902394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:88352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.744 [2024-07-10 14:46:38.902415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902426] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.744 [2024-07-10 14:46:38.902435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:88368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.744 [2024-07-10 14:46:38.902455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:88376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.744 [2024-07-10 14:46:38.902476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.744 [2024-07-10 14:46:38.902495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:88392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.744 [2024-07-10 14:46:38.902515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:88400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.744 [2024-07-10 14:46:38.902536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:88408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.744 [2024-07-10 14:46:38.902556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.744 [2024-07-10 14:46:38.902576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:88424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.744 [2024-07-10 14:46:38.902596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.744 [2024-07-10 14:46:38.902617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902629] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:88440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.744 [2024-07-10 14:46:38.902638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:88448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.744 [2024-07-10 14:46:38.902660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:88456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.744 [2024-07-10 14:46:38.902680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.744 [2024-07-10 14:46:38.902701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.744 [2024-07-10 14:46:38.902722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.744 [2024-07-10 14:46:38.902742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.744 [2024-07-10 14:46:38.902762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.744 [2024-07-10 14:46:38.902782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.744 [2024-07-10 14:46:38.902793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.745 [2024-07-10 14:46:38.902802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.745 [2024-07-10 14:46:38.902813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.745 [2024-07-10 14:46:38.902825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.745 [2024-07-10 14:46:38.902837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:88520 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.745 [2024-07-10 14:46:38.902846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.745 [2024-07-10 14:46:38.902876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:26.745 [2024-07-10 14:46:38.902887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88528 len:8 PRP1 0x0 PRP2 0x0 00:28:26.745 [2024-07-10 14:46:38.902896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.745 [2024-07-10 14:46:38.902909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:26.745 [2024-07-10 14:46:38.902917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:26.745 [2024-07-10 14:46:38.902925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88536 len:8 PRP1 0x0 PRP2 0x0 00:28:26.745 [2024-07-10 14:46:38.902934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.745 [2024-07-10 14:46:38.902943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:26.745 [2024-07-10 14:46:38.902950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:26.745 [2024-07-10 14:46:38.902958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88544 len:8 PRP1 0x0 PRP2 0x0 00:28:26.745 [2024-07-10 14:46:38.902967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.745 [2024-07-10 14:46:38.902976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:26.745 [2024-07-10 14:46:38.902983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:26.745 [2024-07-10 14:46:38.902993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88552 len:8 PRP1 0x0 PRP2 0x0 00:28:26.745 [2024-07-10 14:46:38.903002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.745 [2024-07-10 14:46:38.903012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:26.745 [2024-07-10 14:46:38.903019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:26.745 [2024-07-10 14:46:38.903027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88560 len:8 PRP1 0x0 PRP2 0x0 00:28:26.745 [2024-07-10 14:46:38.903036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.745 [2024-07-10 14:46:38.903045] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:26.745 [2024-07-10 14:46:38.903052] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:26.745 [2024-07-10 14:46:38.903060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88568 len:8 PRP1 0x0 PRP2 0x0 00:28:26.745 [2024-07-10 14:46:38.903068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.745 [2024-07-10 14:46:38.903078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:26.745 [2024-07-10 14:46:38.903085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:26.745 [2024-07-10 14:46:38.903093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88576 len:8 PRP1 0x0 PRP2 0x0 00:28:26.745 [2024-07-10 14:46:38.903101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.745 [2024-07-10 14:46:38.903110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:26.745 [2024-07-10 14:46:38.903118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:26.745 [2024-07-10 14:46:38.903128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88584 len:8 PRP1 0x0 PRP2 0x0 00:28:26.745 [2024-07-10 14:46:38.903137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.745 [2024-07-10 14:46:38.903146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:26.745 [2024-07-10 14:46:38.903153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:26.745 [2024-07-10 14:46:38.903161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88592 len:8 PRP1 0x0 PRP2 0x0 00:28:26.745 [2024-07-10 14:46:38.903170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.745 [2024-07-10 14:46:38.903179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:26.745 [2024-07-10 14:46:38.903186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:26.745 [2024-07-10 14:46:38.903194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88600 len:8 PRP1 0x0 PRP2 0x0 00:28:26.745 [2024-07-10 14:46:38.903203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.745 [2024-07-10 14:46:38.903212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:26.745 [2024-07-10 14:46:38.903219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:26.745 [2024-07-10 14:46:38.903227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88608 len:8 PRP1 0x0 PRP2 0x0 00:28:26.745 [2024-07-10 14:46:38.903236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.745 [2024-07-10 14:46:38.903245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:26.745 [2024-07-10 14:46:38.903252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:26.745 [2024-07-10 14:46:38.903262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88616 len:8 PRP1 0x0 PRP2 0x0 00:28:26.745 [2024-07-10 14:46:38.903270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.745 
[2024-07-10 14:46:38.903289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:26.745 [2024-07-10 14:46:38.903298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:26.745 [2024-07-10 14:46:38.903306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88624 len:8 PRP1 0x0 PRP2 0x0 00:28:26.745 [2024-07-10 14:46:38.903315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.745 [2024-07-10 14:46:38.903325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:26.745 [2024-07-10 14:46:38.903332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:26.745 [2024-07-10 14:46:38.903339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88632 len:8 PRP1 0x0 PRP2 0x0 00:28:26.745 [2024-07-10 14:46:38.903348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.745 [2024-07-10 14:46:38.903357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:26.745 [2024-07-10 14:46:38.903364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:26.745 [2024-07-10 14:46:38.903372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88640 len:8 PRP1 0x0 PRP2 0x0 00:28:26.745 [2024-07-10 14:46:38.903380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.745 [2024-07-10 14:46:38.903390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:26.745 [2024-07-10 14:46:38.903397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:26.745 [2024-07-10 14:46:38.903406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88648 len:8 PRP1 0x0 PRP2 0x0 00:28:26.745 [2024-07-10 14:46:38.903415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.745 [2024-07-10 14:46:38.903424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:26.745 [2024-07-10 14:46:38.903432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:26.745 [2024-07-10 14:46:38.903439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88656 len:8 PRP1 0x0 PRP2 0x0 00:28:26.745 [2024-07-10 14:46:38.903448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.745 [2024-07-10 14:46:38.903457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:26.745 [2024-07-10 14:46:38.903464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:26.745 [2024-07-10 14:46:38.903472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88664 len:8 PRP1 0x0 PRP2 0x0 00:28:26.745 [2024-07-10 14:46:38.903481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.745 [2024-07-10 14:46:38.903524] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6c4ca0 was disconnected and freed. reset controller. 00:28:26.745 [2024-07-10 14:46:38.903774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.745 [2024-07-10 14:46:38.903864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c9650 (9): Bad file descriptor 00:28:26.746 [2024-07-10 14:46:38.903973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.746 [2024-07-10 14:46:38.903995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c9650 with addr=10.0.0.2, port=4420 00:28:26.746 [2024-07-10 14:46:38.904006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c9650 is same with the state(5) to be set 00:28:26.746 [2024-07-10 14:46:38.904028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c9650 (9): Bad file descriptor 00:28:26.746 [2024-07-10 14:46:38.904045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.746 [2024-07-10 14:46:38.904055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.746 [2024-07-10 14:46:38.904065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.746 [2024-07-10 14:46:38.904085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.746 [2024-07-10 14:46:38.904095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.746 14:46:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:28:28.646 [2024-07-10 14:46:40.904345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-07-10 14:46:40.904414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c9650 with addr=10.0.0.2, port=4420 00:28:28.646 [2024-07-10 14:46:40.904431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c9650 is same with the state(5) to be set 00:28:28.646 [2024-07-10 14:46:40.904461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c9650 (9): Bad file descriptor 00:28:28.646 [2024-07-10 14:46:40.904495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.646 [2024-07-10 14:46:40.904507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.646 [2024-07-10 14:46:40.904518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.646 [2024-07-10 14:46:40.904545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
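The connect() failures above report errno 111, i.e. ECONNREFUSED: the listener on 10.0.0.2:4420 has been removed, so every reconnect attempt is refused and bdev_nvme keeps retrying and resetting the controller. A minimal, illustrative way to watch this from the RPC side, reusing only the commands already shown in this log (this loop is a sketch, not part of timeout.sh):

# Poll the bdevperf app for attached controllers; the name disappears from the
# listing once the controller is finally given up on and deleted (compare the
# empty get_controller/get_bdev checks further down in this log).
while sleep 1; do
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_get_controllers | jq -r '.[].name'
done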
00:28:28.646 [2024-07-10 14:46:40.904557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.646 14:46:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:28:28.646 14:46:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:28.646 14:46:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:28:29.212 14:46:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:28:29.212 14:46:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:28:29.212 14:46:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:28:29.212 14:46:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:28:29.472 14:46:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:28:29.472 14:46:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:28:30.864 [2024-07-10 14:46:42.904804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.864 [2024-07-10 14:46:42.904901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c9650 with addr=10.0.0.2, port=4420 00:28:30.864 [2024-07-10 14:46:42.904926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c9650 is same with the state(5) to be set 00:28:30.864 [2024-07-10 14:46:42.904959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c9650 (9): Bad file descriptor 00:28:30.864 [2024-07-10 14:46:42.904979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:30.864 [2024-07-10 14:46:42.904989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:30.864 [2024-07-10 14:46:42.905000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:30.864 [2024-07-10 14:46:42.905026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:30.864 [2024-07-10 14:46:42.905039] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.778 [2024-07-10 14:46:44.905162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.778 [2024-07-10 14:46:44.905247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.778 [2024-07-10 14:46:44.905261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.778 [2024-07-10 14:46:44.905271] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:32.778 [2024-07-10 14:46:44.905311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
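The get_controller/get_bdev steps traced above come from two small helpers in host/timeout.sh. A hedged reconstruction of what they amount to is sketched here; the rpc.py path, socket, RPC methods, jq filter and expected names are copied from the trace, while the shell-function wrapping is an assumption:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# Ask the bdevperf app which NVMe-oF controllers and bdevs it currently has.
get_controller() { "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'; }
get_bdev()       { "$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name'; }

# While the target is still reachable, both names are present:
[[ $(get_controller) == "NVMe0" ]]
[[ $(get_bdev) == "NVMe0n1" ]]
# After the controller is eventually dropped, both helpers print nothing, which is
# what the later empty-string checks in the trace expect.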
00:28:33.712 00:28:33.712 Latency(us) 00:28:33.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:33.712 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:33.712 Verification LBA range: start 0x0 length 0x4000 00:28:33.712 NVMe0n1 : 8.23 1330.77 5.20 15.55 0.00 94934.97 2189.50 7015926.69 00:28:33.712 =================================================================================================================== 00:28:33.712 Total : 1330.77 5.20 15.55 0.00 94934.97 2189.50 7015926.69 00:28:33.712 0 00:28:34.278 14:46:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:28:34.278 14:46:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:34.278 14:46:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:28:34.537 14:46:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:28:34.537 14:46:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:28:34.537 14:46:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:28:34.537 14:46:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:28:35.124 14:46:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:28:35.124 14:46:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 115377 00:28:35.124 14:46:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 115343 00:28:35.124 14:46:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 115343 ']' 00:28:35.124 14:46:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 115343 00:28:35.124 14:46:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:28:35.124 14:46:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:35.124 14:46:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115343 00:28:35.124 14:46:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:28:35.124 14:46:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:28:35.124 killing process with pid 115343 00:28:35.124 14:46:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115343' 00:28:35.124 Received shutdown signal, test time was about 9.502525 seconds 00:28:35.124 00:28:35.124 Latency(us) 00:28:35.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:35.124 =================================================================================================================== 00:28:35.124 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:35.124 14:46:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 115343 00:28:35.124 14:46:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 115343 00:28:35.124 14:46:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:35.381 [2024-07-10 14:46:47.567817] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:35.381 14:46:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=115530 00:28:35.381 14:46:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:28:35.381 14:46:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 115530 /var/tmp/bdevperf.sock 00:28:35.381 14:46:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 115530 ']' 00:28:35.381 14:46:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:35.381 14:46:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:35.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:35.381 14:46:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:35.381 14:46:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:35.381 14:46:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:35.381 [2024-07-10 14:46:47.634236] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:28:35.381 [2024-07-10 14:46:47.634340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115530 ] 00:28:35.641 [2024-07-10 14:46:47.752865] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:35.641 [2024-07-10 14:46:47.770615] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.641 [2024-07-10 14:46:47.807870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:35.641 14:46:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:35.641 14:46:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:28:35.641 14:46:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:36.207 14:46:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:28:36.465 NVMe0n1 00:28:36.465 14:46:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=115564 00:28:36.465 14:46:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:36.465 14:46:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:28:36.465 Running I/O for 10 seconds... 
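For reference, the attach step traced above written out as a single command, with the three timeout knobs this timeout test is built around. The values are copied from the xtrace line; the per-flag comments reflect the generally documented bdev_nvme behaviour rather than something this log asserts:

# --reconnect-delay-sec 1      : wait about 1 s between reconnect attempts
# --fast-io-fail-timeout-sec 2 : start failing queued I/O after ~2 s without a connection
# --ctrlr-loss-timeout-sec 5   : stop retrying and delete the controller after ~5 s
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
  bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 \
  --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1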
00:28:37.400 14:46:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:37.660 [2024-07-10 14:46:49.858085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.660 [2024-07-10 14:46:49.858148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.660 [2024-07-10 14:46:49.858173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.660 [2024-07-10 14:46:49.858185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.660 [2024-07-10 14:46:49.858197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.660 [2024-07-10 14:46:49.858207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.660 [2024-07-10 14:46:49.858219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.660 [2024-07-10 14:46:49.858228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.660 [2024-07-10 14:46:49.858239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.660 [2024-07-10 14:46:49.858249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.660 [2024-07-10 14:46:49.858260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.660 [2024-07-10 14:46:49.858269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.660 [2024-07-10 14:46:49.858293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.660 [2024-07-10 14:46:49.858305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.660 [2024-07-10 14:46:49.858317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.660 [2024-07-10 14:46:49.858326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.660 [2024-07-10 14:46:49.858338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.660 [2024-07-10 14:46:49.858347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.660 [2024-07-10 14:46:49.858358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.660 [2024-07-10 
14:46:49.858367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.660 [2024-07-10 14:46:49.858379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.660 [2024-07-10 14:46:49.858388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.660 [2024-07-10 14:46:49.858399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.660 [2024-07-10 14:46:49.858409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.660 [2024-07-10 14:46:49.858420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.660 [2024-07-10 14:46:49.858430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.858441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.661 [2024-07-10 14:46:49.858451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.858462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.661 [2024-07-10 14:46:49.858471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.858483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.661 [2024-07-10 14:46:49.858492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.858504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.661 [2024-07-10 14:46:49.858514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.858527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.858537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.858548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.858557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.858569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.858578] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.858590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.858599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.858610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.858619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.858631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.858640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.858651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:79296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.858661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.858672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.858682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.858693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.858703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.858714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.858723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.858734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.858743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.858755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:79336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.858764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.858775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.858784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.858796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.858805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.858817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.858826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.858837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.858846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.858858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.858867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.858878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.858887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.858898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.858907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.858919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.858928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.858939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:79408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.858950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.858961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.858976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.858987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:79424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.858996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.859007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.859016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.859028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.859037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.859048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:79448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.859058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.859069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.859078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.859090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.859099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.859110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:79472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.859120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.859131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.859141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.859152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.859162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.859173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:79496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.859183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.859195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.859204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 
[2024-07-10 14:46:49.859215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.859225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.859236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:79520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.859245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.859256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.859265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.859277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:79536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.859297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.859310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:79544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.859319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.661 [2024-07-10 14:46:49.859330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:79552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.661 [2024-07-10 14:46:49.859340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859433] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:39 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80016 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.859982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.859993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.860002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.860013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.860022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.860033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.860042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.860053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 
14:46:49.860062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.860073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.860083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.860094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.860103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.860114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.860123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.860134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.860143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.860154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.860163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.860174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.860183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.860194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.662 [2024-07-10 14:46:49.860203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.662 [2024-07-10 14:46:49.860215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.663 [2024-07-10 14:46:49.860224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.663 [2024-07-10 14:46:49.860235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.663 [2024-07-10 14:46:49.860245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.663 [2024-07-10 14:46:49.860256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.663 [2024-07-10 14:46:49.860267] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.663 [2024-07-10 14:46:49.860289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.663 [2024-07-10 14:46:49.860299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.663 [2024-07-10 14:46:49.860311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.663 [2024-07-10 14:46:49.860320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.663 [2024-07-10 14:46:49.860331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.663 [2024-07-10 14:46:49.860340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.663 [2024-07-10 14:46:49.860369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:37.663 [2024-07-10 14:46:49.860380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80208 len:8 PRP1 0x0 PRP2 0x0 00:28:37.663 [2024-07-10 14:46:49.860389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.663 [2024-07-10 14:46:49.860403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:37.663 [2024-07-10 14:46:49.860410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:37.663 [2024-07-10 14:46:49.860418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80216 len:8 PRP1 0x0 PRP2 0x0 00:28:37.663 [2024-07-10 14:46:49.860428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.663 [2024-07-10 14:46:49.860437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:37.663 [2024-07-10 14:46:49.860444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:37.663 [2024-07-10 14:46:49.860452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80224 len:8 PRP1 0x0 PRP2 0x0 00:28:37.663 [2024-07-10 14:46:49.860461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.663 [2024-07-10 14:46:49.860470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:37.663 [2024-07-10 14:46:49.860478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:37.663 [2024-07-10 14:46:49.860486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80232 len:8 PRP1 0x0 PRP2 0x0 00:28:37.663 [2024-07-10 14:46:49.860495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.663 [2024-07-10 14:46:49.860504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:37.663 [2024-07-10 14:46:49.860511] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:37.663 [2024-07-10 14:46:49.860519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80240 len:8 PRP1 0x0 PRP2 0x0 00:28:37.663 [2024-07-10 14:46:49.860527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.663 [2024-07-10 14:46:49.860537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:37.663 [2024-07-10 14:46:49.860544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:37.663 [2024-07-10 14:46:49.860552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80248 len:8 PRP1 0x0 PRP2 0x0 00:28:37.663 [2024-07-10 14:46:49.860560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.663 [2024-07-10 14:46:49.860570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:37.663 [2024-07-10 14:46:49.860577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:37.663 [2024-07-10 14:46:49.860588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80256 len:8 PRP1 0x0 PRP2 0x0 00:28:37.663 [2024-07-10 14:46:49.860597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.663 [2024-07-10 14:46:49.860606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:37.663 [2024-07-10 14:46:49.860614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:37.663 [2024-07-10 14:46:49.860621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80264 len:8 PRP1 0x0 PRP2 0x0 00:28:37.663 [2024-07-10 14:46:49.860630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.663 [2024-07-10 14:46:49.860639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:37.663 [2024-07-10 14:46:49.860646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:37.663 [2024-07-10 14:46:49.860654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79560 len:8 PRP1 0x0 PRP2 0x0 00:28:37.663 [2024-07-10 14:46:49.860663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.663 [2024-07-10 14:46:49.860673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:37.663 [2024-07-10 14:46:49.860680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:37.663 [2024-07-10 14:46:49.860688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79568 len:8 PRP1 0x0 PRP2 0x0 00:28:37.663 [2024-07-10 14:46:49.860696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.663 [2024-07-10 14:46:49.860705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:37.663 [2024-07-10 14:46:49.860713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:28:37.663 [2024-07-10 14:46:49.860721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79576 len:8 PRP1 0x0 PRP2 0x0 00:28:37.663 [2024-07-10 14:46:49.860730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.663 [2024-07-10 14:46:49.860739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:37.663 [2024-07-10 14:46:49.860746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:37.663 [2024-07-10 14:46:49.860755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79584 len:8 PRP1 0x0 PRP2 0x0 00:28:37.663 [2024-07-10 14:46:49.860764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.663 [2024-07-10 14:46:49.860774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:37.663 [2024-07-10 14:46:49.860781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:37.663 [2024-07-10 14:46:49.860798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79592 len:8 PRP1 0x0 PRP2 0x0 00:28:37.663 [2024-07-10 14:46:49.860807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.663 [2024-07-10 14:46:49.860816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:37.663 [2024-07-10 14:46:49.860824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:37.663 [2024-07-10 14:46:49.860831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79600 len:8 PRP1 0x0 PRP2 0x0 00:28:37.663 [2024-07-10 14:46:49.860852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.663 [2024-07-10 14:46:49.860872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:37.663 [2024-07-10 14:46:49.860886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:37.663 [2024-07-10 14:46:49.860901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79608 len:8 PRP1 0x0 PRP2 0x0 00:28:37.663 [2024-07-10 14:46:49.860916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.663 [2024-07-10 14:46:49.860929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:37.663 [2024-07-10 14:46:49.860937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:37.663 [2024-07-10 14:46:49.860946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79616 len:8 PRP1 0x0 PRP2 0x0 00:28:37.663 [2024-07-10 14:46:49.860954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.663 [2024-07-10 14:46:49.860963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:37.663 [2024-07-10 14:46:49.860971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:37.663 [2024-07-10 14:46:49.860978] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79624 len:8 PRP1 0x0 PRP2 0x0 00:28:37.663 [2024-07-10 14:46:49.860987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.663 [2024-07-10 14:46:49.860997] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:37.663 [2024-07-10 14:46:49.861004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:37.663 [2024-07-10 14:46:49.861011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79632 len:8 PRP1 0x0 PRP2 0x0 00:28:37.663 [2024-07-10 14:46:49.861020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.663 [2024-07-10 14:46:49.861029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:37.663 [2024-07-10 14:46:49.861036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:37.663 [2024-07-10 14:46:49.861044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79640 len:8 PRP1 0x0 PRP2 0x0 00:28:37.663 [2024-07-10 14:46:49.861053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.663 [2024-07-10 14:46:49.861062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:37.663 [2024-07-10 14:46:49.861069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:37.663 [2024-07-10 14:46:49.861077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79648 len:8 PRP1 0x0 PRP2 0x0 00:28:37.663 [2024-07-10 14:46:49.861086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.663 [2024-07-10 14:46:49.861095] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:37.663 [2024-07-10 14:46:49.861102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:37.663 [2024-07-10 14:46:49.861110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79656 len:8 PRP1 0x0 PRP2 0x0 00:28:37.663 [2024-07-10 14:46:49.861119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.663 [2024-07-10 14:46:49.861128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:37.663 [2024-07-10 14:46:49.861135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:37.663 [2024-07-10 14:46:49.861142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79664 len:8 PRP1 0x0 PRP2 0x0 00:28:37.663 [2024-07-10 14:46:49.861151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.664 [2024-07-10 14:46:49.861162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:37.664 [2024-07-10 14:46:49.861170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:37.664 [2024-07-10 14:46:49.861180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:79672 len:8 PRP1 0x0 PRP2 0x0 00:28:37.664 [2024-07-10 14:46:49.861189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.664 [2024-07-10 14:46:49.861198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:37.664 [2024-07-10 14:46:49.861205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:37.664 [2024-07-10 14:46:49.861213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79680 len:8 PRP1 0x0 PRP2 0x0 00:28:37.664 [2024-07-10 14:46:49.861222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.664 [2024-07-10 14:46:49.861266] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10a8ca0 was disconnected and freed. reset controller. 00:28:37.664 [2024-07-10 14:46:49.861546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:37.664 [2024-07-10 14:46:49.861639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ad650 (9): Bad file descriptor 00:28:37.664 [2024-07-10 14:46:49.861746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.664 [2024-07-10 14:46:49.861768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ad650 with addr=10.0.0.2, port=4420 00:28:37.664 [2024-07-10 14:46:49.861779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ad650 is same with the state(5) to be set 00:28:37.664 [2024-07-10 14:46:49.861798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ad650 (9): Bad file descriptor 00:28:37.664 [2024-07-10 14:46:49.861814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:37.664 [2024-07-10 14:46:49.861824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:37.664 [2024-07-10 14:46:49.861834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:37.664 [2024-07-10 14:46:49.861854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:37.664 [2024-07-10 14:46:49.861877] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:37.664 14:46:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:28:38.610 [2024-07-10 14:46:50.862024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.610 [2024-07-10 14:46:50.862099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ad650 with addr=10.0.0.2, port=4420 00:28:38.610 [2024-07-10 14:46:50.862117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ad650 is same with the state(5) to be set 00:28:38.610 [2024-07-10 14:46:50.862147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ad650 (9): Bad file descriptor 00:28:38.610 [2024-07-10 14:46:50.862167] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.610 [2024-07-10 14:46:50.862178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.610 [2024-07-10 14:46:50.862189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.610 [2024-07-10 14:46:50.862218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.610 [2024-07-10 14:46:50.862230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.610 14:46:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:38.869 [2024-07-10 14:46:51.154951] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:39.127 14:46:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 115564 00:28:39.693 [2024-07-10 14:46:51.881991] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:47.807 00:28:47.807 Latency(us) 00:28:47.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.807 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:47.807 Verification LBA range: start 0x0 length 0x4000 00:28:47.807 NVMe0n1 : 10.01 6259.34 24.45 0.00 0.00 20408.69 2085.24 3019898.88 00:28:47.807 =================================================================================================================== 00:28:47.807 Total : 6259.34 24.45 0.00 0.00 20408.69 2085.24 3019898.88 00:28:47.807 0 00:28:47.807 14:46:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=115674 00:28:47.807 14:46:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:47.807 14:46:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:28:47.807 Running I/O for 10 seconds... 
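The aborted I/O, the failed reconnect attempts and the eventual "Resetting controller successful" message above are driven purely by toggling the target-side listener. A minimal sketch of that pattern, with the subsystem NQN, address and port copied from the log (the comments are an interpretation of the surrounding output, not part of the commands):

  # Drop the listener: in-flight I/O is aborted (SQ DELETION) and the host enters its reconnect loop
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Keep the outage shorter than --ctrlr-loss-timeout-sec (5 s here) so the controller keeps retrying instead of being deleted
  sleep 1

  # Restore the listener: the next reconnect attempt succeeds and bdevperf completes the run
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420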
00:28:47.807 14:46:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:47.807 [2024-07-10 14:47:00.016007] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418d90 is same with the state(5) to be set 00:28:47.808 [2024-07-10 14:47:00.017140] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418d90 is same with the state(5) to be set 00:28:47.808 [2024-07-10 14:47:00.017855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.808 [2024-07-10 14:47:00.017897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.808 [2024-07-10 14:47:00.017919] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.808 [2024-07-10 14:47:00.017930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.808 [2024-07-10 14:47:00.017942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.017952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.017963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.017972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.017984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.017993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:105 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:82360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:82416 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:47.809 [2024-07-10 14:47:00.018577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.809 [2024-07-10 14:47:00.018744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.809 [2024-07-10 14:47:00.018756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:82568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.810 [2024-07-10 14:47:00.018765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.018776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.810 [2024-07-10 14:47:00.018785] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.018796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:82584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.810 [2024-07-10 14:47:00.018804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.018816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:82592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.810 [2024-07-10 14:47:00.018825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.018836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:82600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.810 [2024-07-10 14:47:00.018845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.018856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.810 [2024-07-10 14:47:00.018866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.018877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.810 [2024-07-10 14:47:00.018886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.018898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:82624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.810 [2024-07-10 14:47:00.018907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.018918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:82632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.810 [2024-07-10 14:47:00.018929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.018941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.810 [2024-07-10 14:47:00.018950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.018962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:82648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.810 [2024-07-10 14:47:00.018971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.018982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.810 [2024-07-10 14:47:00.018991] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.019004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.810 [2024-07-10 14:47:00.019013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.019024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.810 [2024-07-10 14:47:00.019033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.019044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.810 [2024-07-10 14:47:00.019053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.019064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:82688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.810 [2024-07-10 14:47:00.019073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.019091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.810 [2024-07-10 14:47:00.019100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.019111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.810 [2024-07-10 14:47:00.019120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.019131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.810 [2024-07-10 14:47:00.019141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.019152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.810 [2024-07-10 14:47:00.019161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.019172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.810 [2024-07-10 14:47:00.019182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.019194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.810 [2024-07-10 14:47:00.019203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.019214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.810 [2024-07-10 14:47:00.019223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.019234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.810 [2024-07-10 14:47:00.019244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.019255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.810 [2024-07-10 14:47:00.019264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.019276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.810 [2024-07-10 14:47:00.019295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.019307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.810 [2024-07-10 14:47:00.019316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.019327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.810 [2024-07-10 14:47:00.019337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.019348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.810 [2024-07-10 14:47:00.019357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.019368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.810 [2024-07-10 14:47:00.019377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.019388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.810 [2024-07-10 14:47:00.019397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.019408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.810 [2024-07-10 14:47:00.019418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.019429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.810 [2024-07-10 14:47:00.019438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.019449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.810 [2024-07-10 14:47:00.019458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.019469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.810 [2024-07-10 14:47:00.019478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.019489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.810 [2024-07-10 14:47:00.019498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.019509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.810 [2024-07-10 14:47:00.019518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.019529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.810 [2024-07-10 14:47:00.019539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.019550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.810 [2024-07-10 14:47:00.019559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.019570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.810 [2024-07-10 14:47:00.019579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.810 [2024-07-10 14:47:00.019590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.811 [2024-07-10 14:47:00.019600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.811 [2024-07-10 14:47:00.019611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.811 [2024-07-10 14:47:00.019620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.811 [2024-07-10 
14:47:00.019642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.811 [2024-07-10 14:47:00.019652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.811 [2024-07-10 14:47:00.019663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.811 [2024-07-10 14:47:00.019672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.811 [2024-07-10 14:47:00.019683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.811 [2024-07-10 14:47:00.019693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.811 [2024-07-10 14:47:00.019704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.811 [2024-07-10 14:47:00.019713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.811 [2024-07-10 14:47:00.019724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.811 [2024-07-10 14:47:00.019733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.811 [2024-07-10 14:47:00.019744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.811 [2024-07-10 14:47:00.019753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.811 [2024-07-10 14:47:00.019765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.811 [2024-07-10 14:47:00.019774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.811 [2024-07-10 14:47:00.019785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.811 [2024-07-10 14:47:00.019794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.811 [2024-07-10 14:47:00.019805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.811 [2024-07-10 14:47:00.019815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.811 [2024-07-10 14:47:00.019826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.811 [2024-07-10 14:47:00.019835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.811 [2024-07-10 14:47:00.019846] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.811 [2024-07-10 14:47:00.019856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.811 [2024-07-10 14:47:00.019867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.811 [2024-07-10 14:47:00.019877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.811 [2024-07-10 14:47:00.019888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.811 [2024-07-10 14:47:00.019897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.811 [2024-07-10 14:47:00.019908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.811 [2024-07-10 14:47:00.019918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.811 [2024-07-10 14:47:00.019929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.811 [2024-07-10 14:47:00.019938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.811 [2024-07-10 14:47:00.019949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.811 [2024-07-10 14:47:00.019958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.811 [2024-07-10 14:47:00.019969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.811 [2024-07-10 14:47:00.019978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.811 [2024-07-10 14:47:00.019990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.811 [2024-07-10 14:47:00.019999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.811 [2024-07-10 14:47:00.020010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.811 [2024-07-10 14:47:00.020019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.811 [2024-07-10 14:47:00.020031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.811 [2024-07-10 14:47:00.020040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.811 [2024-07-10 14:47:00.020051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:126 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:47.811 [2024-07-10 14:47:00.020060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.811 [2024-07-10 14:47:00.020072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:47.811 [2024-07-10 14:47:00.020081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE / ABORTED - SQ DELETION (00/08) pairs repeat for every queued write from lba 83080 through lba 83256 (timestamps 14:47:00.020092 through 14:47:00.020570) ...]
00:28:47.812 [2024-07-10 14:47:00.020601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:47.812 [2024-07-10 14:47:00.020612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:47.812 [2024-07-10 14:47:00.020620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83264 len:8 PRP1 0x0 PRP2 0x0
00:28:47.812 [2024-07-10 14:47:00.020630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.812 [2024-07-10 14:47:00.020674] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x117c7e0 was disconnected and freed. reset controller.
00:28:47.812 [2024-07-10 14:47:00.020937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.812 [2024-07-10 14:47:00.021017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ad650 (9): Bad file descriptor
00:28:47.812 [2024-07-10 14:47:00.021138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.812 [2024-07-10 14:47:00.021160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ad650 with addr=10.0.0.2, port=4420
00:28:47.812 [2024-07-10 14:47:00.021171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ad650 is same with the state(5) to be set
00:28:47.812 [2024-07-10 14:47:00.021190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ad650 (9): Bad file descriptor
00:28:47.812 [2024-07-10 14:47:00.021207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.812 [2024-07-10 14:47:00.021216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.812 [2024-07-10 14:47:00.021226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.812 [2024-07-10 14:47:00.021246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.812 [2024-07-10 14:47:00.021258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.812 14:47:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3
[... the same connect() failed (errno = 111) / reconnect / "Resetting controller failed." cycle repeats against tqpair=0x10ad650 at 14:47:01 (00:28:48.748), 14:47:02 (00:28:50.121) and 14:47:03 (00:28:51.055) ...]
00:28:51.055 14:47:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:51.055 [2024-07-10 14:47:03.269107] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:51.055 14:47:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 115674
00:28:51.989 [2024-07-10 14:47:04.070645] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
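The recovery above is the heart of this phase of the nvmf_timeout host test: with the target's 10.0.0.2:4420 listener apparently removed earlier in the run (the second phase removes it explicitly at host/timeout.sh@126 further down), every host reconnect attempt fails with connect() errno = 111 until host/timeout.sh re-adds the listener, at which point the pending controller reset completes. A minimal sketch of driving the same drop/restore cycle by hand, using only the rpc.py calls that appear in this log (the 3-second pause is illustrative, not the script's exact timing):

    # Assumes a running SPDK target that already exposes nqn.2016-06.io.spdk:cnode1 over TCP.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Drop the listener: queued I/O on the subsystem is aborted (SQ DELETION) and the
    # host's reconnect attempts start failing with errno 111, as in the trace above.
    $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3
    # Restore the listener; the next reconnect attempt succeeds and the controller reset
    # completes ("Resetting controller successful").
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420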
00:28:57.314 
00:28:57.314 Latency(us)
00:28:57.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:57.314 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:57.314 Verification LBA range: start 0x0 length 0x4000
00:28:57.314 NVMe0n1 : 10.01 5296.14 20.69 3556.28 0.00 14428.50 930.91 3019898.88
00:28:57.314 ===================================================================================================================
00:28:57.314 Total : 5296.14 20.69 3556.28 0.00 14428.50 0.00 3019898.88
00:28:57.314 0
00:28:57.314 14:47:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 115530
00:28:57.314 14:47:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 115530 ']'
00:28:57.314 14:47:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 115530
00:28:57.314 14:47:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:28:57.314 14:47:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:57.314 14:47:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115530
00:28:57.314 14:47:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:28:57.314 14:47:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:28:57.314 killing process with pid 115530
00:28:57.314 14:47:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115530'
00:28:57.314 Received shutdown signal, test time was about 10.000000 seconds
00:28:57.314 
00:28:57.314 Latency(us)
00:28:57.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:57.314 ===================================================================================================================
00:28:57.314 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:57.314 14:47:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 115530
00:28:57.314 14:47:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 115530
00:28:57.314 14:47:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=115797
00:28:57.314 14:47:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:28:57.314 14:47:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 115797 /var/tmp/bdevperf.sock
00:28:57.314 14:47:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 115797 ']'
00:28:57.314 14:47:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:28:57.314 14:47:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:57.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:28:57.314 14:47:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:28:57.314 14:47:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:57.314 14:47:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:28:57.314 [2024-07-10 14:47:09.111442] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization...
00:28:57.314 [2024-07-10 14:47:09.111579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115797 ]
00:28:57.314 [2024-07-10 14:47:09.238052] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation.
00:28:57.314 [2024-07-10 14:47:09.253015] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:57.314 [2024-07-10 14:47:09.297404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:28:57.314 14:47:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:57.314 14:47:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0
00:28:57.314 14:47:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=115806
00:28:57.314 14:47:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 115797 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:28:57.314 14:47:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:28:57.571 14:47:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:28:57.828 NVMe0n1
00:28:57.828 14:47:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=115865
00:28:57.828 14:47:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:28:57.828 14:47:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:28:58.084 Running I/O for 10 seconds...
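Before the listener is pulled again below, the trace has just brought up the second bdevperf instance on its own RPC socket. The same setup, condensed into a single hedged sketch (paths, addresses and option values are exactly those in the trace above; the backgrounding and the readiness comment are simplified stand-ins for the harness's waitforlisten helper, and the bdev_nvme_set_options flags are reproduced verbatim rather than interpreted):

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/bdevperf.sock
    # Start bdevperf idle (-z) on core mask 0x4 with a 128-deep, 4096-byte randread job for 10 s.
    $SPDK/build/examples/bdevperf -m 0x4 -z -r $SOCK -q 128 -o 4096 -w randread -t 10 -f &
    # (the test waits for $SOCK to appear before issuing RPCs)
    # Configure the NVMe bdev module and attach the target, with a 5 s ctrlr-loss timeout
    # and a 2 s reconnect delay so the timeout paths can be exercised.
    $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_set_options -r -1 -e 9
    $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # Kick off the queued job; this is what prints "Running I/O for 10 seconds..." above.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests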
00:28:59.016 14:47:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:59.291 [2024-07-10 14:47:11.439589] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241c590 is same with the state(5) to be set
00:28:59.291 [2024-07-10 14:47:11.439653] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241c590 is same with the state(5) to be set
[... the same tcp.c:1607 recv-state message for tqpair=0x241c590 repeats continuously, timestamps 14:47:11.439665 through 14:47:11.440402 ...]
00:28:59.291 [2024-07-10 14:47:11.441220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.292 [2024-07-10 14:47:11.441275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the remaining queued READ commands are dumped and aborted the same way (ABORTED - SQ DELETION (00/08)) for the other cids and lbas, timestamps 14:47:11.441335 through 14:47:11.444423 ...]
00:28:59.294 [2024-07-10 14:47:11.444435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:26320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.294 [2024-07-10 14:47:11.444445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.294 [2024-07-10 14:47:11.444456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.294 [2024-07-10 14:47:11.444466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.294 [2024-07-10 14:47:11.444477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:36520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.294 [2024-07-10 14:47:11.444486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.294 [2024-07-10 14:47:11.444497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:29792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.294 [2024-07-10 14:47:11.444506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.294 [2024-07-10 14:47:11.444518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:49864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.294 [2024-07-10 14:47:11.444527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.294 [2024-07-10 14:47:11.444538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:53448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.294 [2024-07-10 14:47:11.444547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.294 [2024-07-10 14:47:11.444558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:129872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.294 [2024-07-10 14:47:11.444567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.294 [2024-07-10 14:47:11.444579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.294 [2024-07-10 14:47:11.444588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.294 [2024-07-10 14:47:11.444599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:118488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.294 [2024-07-10 14:47:11.444608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.294 [2024-07-10 14:47:11.444620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.294 [2024-07-10 14:47:11.444629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.294 [2024-07-10 14:47:11.444640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:100616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.294 [2024-07-10 14:47:11.444652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:59.294 [2024-07-10 14:47:11.444664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.294 [2024-07-10 14:47:11.444673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.294 [2024-07-10 14:47:11.444685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:104656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.294 [2024-07-10 14:47:11.444694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.294 [2024-07-10 14:47:11.444705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.294 [2024-07-10 14:47:11.444714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.294 [2024-07-10 14:47:11.444725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.294 [2024-07-10 14:47:11.444735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.294 [2024-07-10 14:47:11.444746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.295 [2024-07-10 14:47:11.444757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.295 [2024-07-10 14:47:11.444769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.295 [2024-07-10 14:47:11.444778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.295 [2024-07-10 14:47:11.444790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:41776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.295 [2024-07-10 14:47:11.444799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.295 [2024-07-10 14:47:11.444844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:59.295 [2024-07-10 14:47:11.444857] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:59.295 [2024-07-10 14:47:11.444866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34032 len:8 PRP1 0x0 PRP2 0x0 00:28:59.295 [2024-07-10 14:47:11.444875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.295 [2024-07-10 14:47:11.444928] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x53cca0 was disconnected and freed. reset controller. 
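The flood of READ completions above is bdev_nvme draining the I/O qpair before the reset: every queued command is completed with status (00/08), which decodes to status code type 0x0 (generic command status) and status code 0x08 (command aborted due to SQ deletion), after which the qpair is freed and the controller reset begins. When reading a console log like this one, those aborts can be tallied with standard tools; a minimal sketch, assuming the output has been saved to a file named build.log (that path is only an example, not something the test produces):

    # Total queued I/Os that were aborted when the submission queue was deleted.
    grep -c 'ABORTED - SQ DELETION' build.log
    # The same aborts broken down per submission queue id.
    grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' build.log | awk '{print $NF}' | sort | uniq -c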
00:28:59.295 [2024-07-10 14:47:11.445236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.295 [2024-07-10 14:47:11.445352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x541650 (9): Bad file descriptor
00:28:59.295 [2024-07-10 14:47:11.445495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.295 [2024-07-10 14:47:11.445535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x541650 with addr=10.0.0.2, port=4420
00:28:59.295 [2024-07-10 14:47:11.445549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x541650 is same with the state(5) to be set
00:28:59.295 [2024-07-10 14:47:11.445570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x541650 (9): Bad file descriptor
00:28:59.295 [2024-07-10 14:47:11.445587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.295 [2024-07-10 14:47:11.445597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.295 [2024-07-10 14:47:11.445609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.295 [2024-07-10 14:47:11.445638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.295 [2024-07-10 14:47:11.445658] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.295 14:47:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 115865
00:29:01.195 [2024-07-10 14:47:13.445864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.195 [2024-07-10 14:47:13.445945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x541650 with addr=10.0.0.2, port=4420
00:29:01.195 [2024-07-10 14:47:13.445962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x541650 is same with the state(5) to be set
00:29:01.195 [2024-07-10 14:47:13.445994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x541650 (9): Bad file descriptor
00:29:01.195 [2024-07-10 14:47:13.446028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.195 [2024-07-10 14:47:13.446040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.195 [2024-07-10 14:47:13.446052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.195 [2024-07-10 14:47:13.446080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
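Each attempt above fails the same way: connect() to 10.0.0.2:4420 is refused (errno = 111), spdk_nvme_ctrlr_reconnect_poll_async gives up, and bdev_nvme schedules the next reset roughly two seconds later. The pass criterion applied further down is simply that the trace file recorded more than two 'reconnect delay bdev controller NVMe0' events before the host was torn down, which is what the grep -c and (( 3 <= 2 )) trace lines below show. A minimal sketch of that style of check, with the trace path and the threshold written out as assumptions rather than copied from host/timeout.sh:

    # Fail unless the trace recorded at least three reconnect delays.
    # trace.txt is assumed to hold the probe output shown below ('Attaching 5 probes...').
    delays=$(grep -c 'reconnect delay bdev controller NVMe0' trace.txt)
    if (( delays <= 2 )); then
        echo "expected at least 3 reconnect delay events, got $delays" >&2
        exit 1
    fi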
00:29:01.195 [2024-07-10 14:47:13.446092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.722 [2024-07-10 14:47:15.446314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.722 [2024-07-10 14:47:15.446391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x541650 with addr=10.0.0.2, port=4420
00:29:03.722 [2024-07-10 14:47:15.446409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x541650 is same with the state(5) to be set
00:29:03.722 [2024-07-10 14:47:15.446440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x541650 (9): Bad file descriptor
00:29:03.722 [2024-07-10 14:47:15.446461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.722 [2024-07-10 14:47:15.446472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.722 [2024-07-10 14:47:15.446483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.722 [2024-07-10 14:47:15.446511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.722 [2024-07-10 14:47:15.446523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.665 [2024-07-10 14:47:17.446663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.665 [2024-07-10 14:47:17.446763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.665 [2024-07-10 14:47:17.446785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.665 [2024-07-10 14:47:17.446802] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:29:05.665 [2024-07-10 14:47:17.446841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.231
00:29:06.231 Latency(us)
00:29:06.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:06.231 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:29:06.231 NVMe0n1 : 8.21 2451.78 9.58 15.58 0.00 51817.04 3544.90 7015926.69
00:29:06.231 ===================================================================================================================
00:29:06.231 Total : 2451.78 9.58 15.58 0.00 51817.04 3544.90 7015926.69
00:29:06.231 0
00:29:06.231 14:47:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:29:06.231 Attaching 5 probes...
00:29:06.231 1532.321445: reset bdev controller NVMe0
00:29:06.231 1532.496208: reconnect bdev controller NVMe0
00:29:06.231 3532.786973: reconnect delay bdev controller NVMe0
00:29:06.231 3532.823982: reconnect bdev controller NVMe0
00:29:06.231 5533.236418: reconnect delay bdev controller NVMe0
00:29:06.231 5533.264695: reconnect bdev controller NVMe0
00:29:06.231 7533.700594: reconnect delay bdev controller NVMe0
00:29:06.231 7533.742301: reconnect bdev controller NVMe0
00:29:06.231 14:47:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
00:29:06.231 14:47:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 ))
00:29:06.231 14:47:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 115806
00:29:06.231 14:47:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:29:06.231 14:47:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 115797
00:29:06.231 14:47:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 115797 ']'
00:29:06.231 14:47:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 115797
00:29:06.231 14:47:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:29:06.231 14:47:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:06.231 14:47:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115797
00:29:06.231 14:47:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:29:06.231 14:47:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
killing process with pid 115797
14:47:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115797'
14:47:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 115797
Received shutdown signal, test time was about 8.271343 seconds
00:29:06.231
00:29:06.231 Latency(us)
00:29:06.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:06.231 ===================================================================================================================
00:29:06.231 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:47:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 115797
00:29:06.489 14:47:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:06.749 14:47:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT
00:29:06.749 14:47:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini
00:29:06.749 14:47:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup
00:29:06.749 14:47:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync
00:29:07.008 14:47:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:29:07.008 14:47:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e
00:29:07.008 14:47:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20}
00:29:07.008 14:47:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:29:07.008 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:29:07.008 14:47:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:29:07.008 14:47:19 nvmf_tcp.nvmf_timeout --
nvmf/common.sh@124 -- # set -e 00:29:07.008 14:47:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:29:07.008 14:47:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 115265 ']' 00:29:07.008 14:47:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 115265 00:29:07.008 14:47:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 115265 ']' 00:29:07.008 14:47:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 115265 00:29:07.008 14:47:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:29:07.008 14:47:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:07.008 14:47:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115265 00:29:07.008 14:47:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:07.008 14:47:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:07.008 killing process with pid 115265 00:29:07.008 14:47:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115265' 00:29:07.008 14:47:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 115265 00:29:07.008 14:47:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 115265 00:29:07.267 14:47:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:07.267 14:47:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:07.267 14:47:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:07.267 14:47:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:07.267 14:47:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:07.267 14:47:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.267 14:47:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:07.267 14:47:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.267 14:47:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:07.267 00:29:07.267 real 0m44.903s 00:29:07.267 user 2m13.362s 00:29:07.267 sys 0m4.522s 00:29:07.267 14:47:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:07.267 14:47:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:07.267 ************************************ 00:29:07.267 END TEST nvmf_timeout 00:29:07.267 ************************************ 00:29:07.267 14:47:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:07.267 14:47:19 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:29:07.267 14:47:19 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:29:07.267 14:47:19 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:07.267 14:47:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:07.267 14:47:19 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:29:07.267 ************************************ 00:29:07.267 END TEST nvmf_tcp 00:29:07.267 ************************************ 00:29:07.267 00:29:07.267 real 21m21.558s 00:29:07.267 user 63m53.674s 00:29:07.267 sys 4m23.172s 00:29:07.267 14:47:19 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:07.267 14:47:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:07.267 14:47:19 -- 
common/autotest_common.sh@1142 -- # return 0 00:29:07.267 14:47:19 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:29:07.267 14:47:19 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:07.267 14:47:19 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:07.267 14:47:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:07.267 14:47:19 -- common/autotest_common.sh@10 -- # set +x 00:29:07.267 ************************************ 00:29:07.267 START TEST spdkcli_nvmf_tcp 00:29:07.267 ************************************ 00:29:07.267 14:47:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:07.267 * Looking for test storage... 00:29:07.525 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:07.525 14:47:19 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:07.526 14:47:19 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:07.526 14:47:19 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:07.526 14:47:19 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:07.526 14:47:19 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:07.526 14:47:19 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:07.526 14:47:19 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:07.526 14:47:19 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:07.526 14:47:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:07.526 14:47:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:07.526 14:47:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:07.526 14:47:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:07.526 14:47:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:07.526 14:47:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:07.526 14:47:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:07.526 14:47:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=116075 00:29:07.526 14:47:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 116075 00:29:07.526 14:47:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 116075 ']' 00:29:07.526 14:47:19 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:07.526 14:47:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.526 14:47:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:07.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.526 14:47:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.526 14:47:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:07.526 14:47:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:07.526 [2024-07-10 14:47:19.646264] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:29:07.526 [2024-07-10 14:47:19.646376] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116075 ] 00:29:07.526 [2024-07-10 14:47:19.768183] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:07.526 [2024-07-10 14:47:19.781056] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:07.783 [2024-07-10 14:47:19.820396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.784 [2024-07-10 14:47:19.820409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.784 14:47:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:07.784 14:47:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:29:07.784 14:47:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:07.784 14:47:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:07.784 14:47:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:07.784 14:47:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:07.784 14:47:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:07.784 14:47:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:07.784 14:47:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:07.784 14:47:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:07.784 14:47:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:07.784 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:07.784 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:07.784 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:07.784 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:07.784 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:07.784 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:07.784 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:07.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:07.784 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:07.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:07.784 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:07.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:07.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:07.784 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:07.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:07.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:07.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:07.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:07.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:07.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:07.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:07.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:07.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:07.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:07.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:07.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:07.784 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:07.784 ' 00:29:11.064 [2024-07-10 14:47:22.679955] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:11.993 [2024-07-10 14:47:23.945002] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:14.519 [2024-07-10 14:47:26.338604] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:16.420 [2024-07-10 14:47:28.400063] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:17.793 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:17.793 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:17.793 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:17.793 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:17.793 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:17.793 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:17.793 Executing command: 
['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:17.793 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:17.793 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:17.793 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:17.793 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:17.793 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:17.793 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:17.793 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:17.793 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:17.793 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:17.793 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:17.793 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:17.793 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:17.793 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:17.793 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:17.793 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:17.793 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:17.793 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:17.793 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:17.793 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:17.793 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:17.793 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:17.793 14:47:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:17.793 14:47:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:17.793 14:47:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:18.050 14:47:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:18.050 14:47:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 
00:29:18.050 14:47:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:18.050 14:47:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:18.050 14:47:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:29:18.308 14:47:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:18.308 14:47:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:18.308 14:47:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:18.308 14:47:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:18.308 14:47:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:18.308 14:47:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:18.308 14:47:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:18.308 14:47:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:18.308 14:47:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:18.308 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:18.308 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:18.308 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:18.308 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:18.308 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:18.308 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:18.308 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:18.308 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:18.308 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:18.308 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:18.308 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:18.308 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:18.308 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:18.308 ' 00:29:24.860 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:24.860 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:24.860 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:24.860 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:24.860 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:24.860 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:24.860 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 
00:29:24.860 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:24.860 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:24.860 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:24.860 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:24.860 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:24.860 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:24.860 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:24.860 14:47:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:24.860 14:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:24.860 14:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:24.860 14:47:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 116075 00:29:24.860 14:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 116075 ']' 00:29:24.860 14:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 116075 00:29:24.860 14:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:29:24.860 14:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:24.860 14:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116075 00:29:24.860 14:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:24.860 14:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:24.860 killing process with pid 116075 00:29:24.860 14:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116075' 00:29:24.860 14:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 116075 00:29:24.860 14:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 116075 00:29:24.860 14:47:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:24.860 14:47:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:24.860 14:47:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 116075 ']' 00:29:24.860 14:47:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 116075 00:29:24.860 14:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 116075 ']' 00:29:24.860 14:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 116075 00:29:24.860 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (116075) - No such process 00:29:24.860 Process with pid 116075 is not found 00:29:24.860 14:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 116075 is not found' 00:29:24.860 14:47:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:24.860 14:47:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:24.860 14:47:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:24.860 00:29:24.860 real 0m16.779s 00:29:24.860 user 0m36.570s 00:29:24.860 sys 0m0.864s 00:29:24.860 14:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:24.860 ************************************ 00:29:24.860 END TEST spdkcli_nvmf_tcp 00:29:24.860 14:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:24.860 
************************************ 00:29:24.860 14:47:36 -- common/autotest_common.sh@1142 -- # return 0 00:29:24.860 14:47:36 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:24.860 14:47:36 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:24.860 14:47:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:24.860 14:47:36 -- common/autotest_common.sh@10 -- # set +x 00:29:24.860 ************************************ 00:29:24.860 START TEST nvmf_identify_passthru 00:29:24.860 ************************************ 00:29:24.860 14:47:36 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:24.860 * Looking for test storage... 00:29:24.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:24.860 14:47:36 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:24.860 14:47:36 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:24.860 14:47:36 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.860 14:47:36 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.860 14:47:36 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.860 14:47:36 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.860 14:47:36 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.860 14:47:36 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:24.860 14:47:36 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:24.860 14:47:36 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:24.860 14:47:36 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:24.860 14:47:36 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.860 14:47:36 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.860 14:47:36 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.860 14:47:36 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.860 14:47:36 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.860 14:47:36 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:24.860 14:47:36 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.860 14:47:36 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.860 14:47:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:24.860 14:47:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@432 
-- # nvmf_veth_init 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:24.860 Cannot find device "nvmf_tgt_br" 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:24.860 Cannot find device "nvmf_tgt_br2" 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:24.860 Cannot find device "nvmf_tgt_br" 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:24.860 Cannot find device "nvmf_tgt_br2" 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:24.860 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:24.860 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth 
peer name nvmf_tgt_br 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:24.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:24.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:29:24.860 00:29:24.860 --- 10.0.0.2 ping statistics --- 00:29:24.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.860 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:24.860 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:24.860 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:29:24.860 00:29:24.860 --- 10.0.0.3 ping statistics --- 00:29:24.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.860 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:24.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:24.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:29:24.860 00:29:24.860 --- 10.0.0.1 ping statistics --- 00:29:24.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.860 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:24.860 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:29:24.861 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:24.861 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:24.861 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:24.861 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:24.861 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:24.861 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:24.861 14:47:36 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:24.861 14:47:36 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:24.861 14:47:36 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:24.861 14:47:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:24.861 14:47:36 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:24.861 14:47:36 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:29:24.861 14:47:36 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:29:24.861 14:47:36 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:29:24.861 14:47:36 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:29:24.861 14:47:36 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:29:24.861 14:47:36 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:29:24.861 14:47:36 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:24.861 14:47:36 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:24.861 14:47:36 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:24.861 14:47:36 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:29:24.861 14:47:36 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:29:24.861 14:47:36 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:29:24.861 14:47:36 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:29:24.861 14:47:36 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:29:24.861 14:47:36 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:24.861 14:47:36 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:29:24.861 14:47:36 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:24.861 14:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
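The nvmf_veth_init trace above boils down to a small veth/bridge topology with the SPDK target isolated in its own network namespace. A condensed sketch of the same steps, using the interface and namespace names from the log (assumes iproute2 and root; the second target interface nvmf_tgt_if2/10.0.0.3 is created the same way and omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                        # bridge the host-side veth peers
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                             # host-to-namespace sanity check

The three pings in the trace (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm the bridge forwards in both directions before any NVMe-oF traffic is attempted.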
00:29:24.861 14:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:29:24.861 14:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:29:24.861 14:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:25.117 14:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:29:25.117 14:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:25.117 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:25.117 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:25.117 14:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:29:25.117 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:25.117 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:25.117 14:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=116546 00:29:25.117 14:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:25.117 14:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:25.117 14:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 116546 00:29:25.117 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 116546 ']' 00:29:25.117 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.117 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:25.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.118 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.118 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:25.118 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:25.118 [2024-07-10 14:47:37.330442] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:29:25.118 [2024-07-10 14:47:37.330548] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:25.375 [2024-07-10 14:47:37.457369] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:25.375 [2024-07-10 14:47:37.476955] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:25.375 [2024-07-10 14:47:37.523305] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:25.375 [2024-07-10 14:47:37.523396] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
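The target is started inside that namespace with --wait-for-rpc, and the harness blocks until /var/tmp/spdk.sock answers before configuring it; rpc_cmd in the trace is a thin wrapper around scripts/rpc.py. Condensed, the sequence traced below amounts to the following (paths abbreviated, all arguments as they appear in the log):

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  # ...wait for the RPC socket to come up, then:
  scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr       # serve Identify from the backing controller
  scripts/rpc.py framework_start_init
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The test then runs spdk_nvme_identify twice, once against the local PCIe device and once against the TCP listener, and passes only if the serial and model numbers match (12340 / QEMU here), which is the behavior --passthru-identify-ctrlr enables.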
00:29:25.375 [2024-07-10 14:47:37.523419] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:25.375 [2024-07-10 14:47:37.523437] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:25.375 [2024-07-10 14:47:37.523452] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:25.375 [2024-07-10 14:47:37.523541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.375 [2024-07-10 14:47:37.523667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:25.375 [2024-07-10 14:47:37.524564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:25.375 [2024-07-10 14:47:37.524578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.375 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:25.375 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:29:25.375 14:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:25.375 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.375 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:25.375 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.375 14:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:25.375 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.375 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:25.375 [2024-07-10 14:47:37.647100] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:25.375 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.375 14:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:25.375 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.375 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:25.375 [2024-07-10 14:47:37.660695] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:25.633 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.633 14:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:25.633 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:25.633 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:25.633 14:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:29:25.633 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.633 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:25.633 Nvme0n1 00:29:25.633 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.633 14:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:29:25.633 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.633 14:47:37 nvmf_identify_passthru 
-- common/autotest_common.sh@10 -- # set +x 00:29:25.633 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.633 14:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:25.633 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.633 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:25.633 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.633 14:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:25.633 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.633 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:25.633 [2024-07-10 14:47:37.806570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:25.633 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.633 14:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:29:25.633 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.633 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:25.633 [ 00:29:25.633 { 00:29:25.633 "allow_any_host": true, 00:29:25.633 "hosts": [], 00:29:25.633 "listen_addresses": [], 00:29:25.633 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:25.633 "subtype": "Discovery" 00:29:25.633 }, 00:29:25.633 { 00:29:25.633 "allow_any_host": true, 00:29:25.633 "hosts": [], 00:29:25.633 "listen_addresses": [ 00:29:25.633 { 00:29:25.633 "adrfam": "IPv4", 00:29:25.633 "traddr": "10.0.0.2", 00:29:25.633 "trsvcid": "4420", 00:29:25.633 "trtype": "TCP" 00:29:25.633 } 00:29:25.633 ], 00:29:25.633 "max_cntlid": 65519, 00:29:25.633 "max_namespaces": 1, 00:29:25.633 "min_cntlid": 1, 00:29:25.633 "model_number": "SPDK bdev Controller", 00:29:25.633 "namespaces": [ 00:29:25.633 { 00:29:25.633 "bdev_name": "Nvme0n1", 00:29:25.633 "name": "Nvme0n1", 00:29:25.633 "nguid": "43AB99C517064FCE805E5CC8106CF850", 00:29:25.633 "nsid": 1, 00:29:25.633 "uuid": "43ab99c5-1706-4fce-805e-5cc8106cf850" 00:29:25.633 } 00:29:25.633 ], 00:29:25.633 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:25.633 "serial_number": "SPDK00000000000001", 00:29:25.633 "subtype": "NVMe" 00:29:25.633 } 00:29:25.633 ] 00:29:25.633 14:47:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.633 14:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:25.633 14:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:29:25.633 14:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:29:25.890 14:47:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:29:25.890 14:47:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:29:25.890 14:47:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:25.890 14:47:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:29:26.147 14:47:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:29:26.147 14:47:38 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:29:26.147 14:47:38 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:29:26.147 14:47:38 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:26.147 14:47:38 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.147 14:47:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:26.147 14:47:38 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.147 14:47:38 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:29:26.147 14:47:38 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:29:26.147 14:47:38 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:26.147 14:47:38 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:29:26.147 14:47:38 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:26.147 14:47:38 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:29:26.147 14:47:38 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:26.147 14:47:38 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:26.147 rmmod nvme_tcp 00:29:26.147 rmmod nvme_fabrics 00:29:26.147 rmmod nvme_keyring 00:29:26.147 14:47:38 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:26.147 14:47:38 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:29:26.147 14:47:38 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:29:26.147 14:47:38 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 116546 ']' 00:29:26.147 14:47:38 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 116546 00:29:26.147 14:47:38 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 116546 ']' 00:29:26.147 14:47:38 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 116546 00:29:26.147 14:47:38 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:29:26.147 14:47:38 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:26.147 14:47:38 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116546 00:29:26.147 14:47:38 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:26.147 14:47:38 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:26.147 killing process with pid 116546 00:29:26.147 14:47:38 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116546' 00:29:26.147 14:47:38 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 116546 00:29:26.147 14:47:38 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 116546 00:29:26.405 14:47:38 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:26.405 14:47:38 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:26.405 14:47:38 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:26.405 14:47:38 nvmf_identify_passthru -- nvmf/common.sh@274 
-- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:26.405 14:47:38 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:26.405 14:47:38 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.405 14:47:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:26.405 14:47:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.405 14:47:38 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:26.405 00:29:26.405 real 0m2.288s 00:29:26.405 user 0m4.465s 00:29:26.405 sys 0m0.703s 00:29:26.405 14:47:38 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:26.405 14:47:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:26.405 ************************************ 00:29:26.405 END TEST nvmf_identify_passthru 00:29:26.405 ************************************ 00:29:26.405 14:47:38 -- common/autotest_common.sh@1142 -- # return 0 00:29:26.405 14:47:38 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:26.405 14:47:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:26.405 14:47:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:26.405 14:47:38 -- common/autotest_common.sh@10 -- # set +x 00:29:26.405 ************************************ 00:29:26.405 START TEST nvmf_dif 00:29:26.405 ************************************ 00:29:26.405 14:47:38 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:26.664 * Looking for test storage... 00:29:26.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:26.664 14:47:38 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:26.664 14:47:38 nvmf_dif -- scripts/common.sh@508 -- # 
[[ -e /bin/wpdk_common.sh ]] 00:29:26.664 14:47:38 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:26.664 14:47:38 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:26.664 14:47:38 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.664 14:47:38 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.664 14:47:38 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.664 14:47:38 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:29:26.664 14:47:38 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:26.664 14:47:38 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:29:26.664 14:47:38 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:26.664 14:47:38 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:26.664 14:47:38 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:29:26.664 14:47:38 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@412 -- # 
remove_spdk_ns 00:29:26.664 14:47:38 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.664 14:47:38 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:26.664 14:47:38 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:26.665 Cannot find device "nvmf_tgt_br" 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@155 -- # true 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:26.665 Cannot find device "nvmf_tgt_br2" 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@156 -- # true 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:26.665 Cannot find device "nvmf_tgt_br" 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@158 -- # true 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:26.665 Cannot find device "nvmf_tgt_br2" 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@159 -- # true 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:26.665 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@162 -- # true 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:26.665 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@163 
-- # true 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:26.665 14:47:38 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:26.923 14:47:38 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:26.923 14:47:38 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:26.923 14:47:38 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:26.923 14:47:38 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:26.923 14:47:38 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:26.923 14:47:38 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:26.923 14:47:38 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:26.923 14:47:38 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:26.923 14:47:39 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:26.923 14:47:39 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:26.923 14:47:39 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:26.923 14:47:39 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:26.923 14:47:39 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:26.923 14:47:39 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:26.923 14:47:39 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:26.923 14:47:39 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:26.923 14:47:39 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:26.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:26.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:29:26.923 00:29:26.923 --- 10.0.0.2 ping statistics --- 00:29:26.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.923 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:29:26.923 14:47:39 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:26.923 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:26.923 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:29:26.923 00:29:26.923 --- 10.0.0.3 ping statistics --- 00:29:26.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.923 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:29:26.923 14:47:39 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:26.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:26.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:29:26.924 00:29:26.924 --- 10.0.0.1 ping statistics --- 00:29:26.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.924 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:29:26.924 14:47:39 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:26.924 14:47:39 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:29:26.924 14:47:39 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:26.924 14:47:39 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:27.181 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:27.181 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:27.181 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:27.181 14:47:39 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:27.181 14:47:39 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:27.181 14:47:39 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:27.181 14:47:39 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:27.181 14:47:39 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:27.181 14:47:39 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:27.439 14:47:39 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:27.439 14:47:39 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:29:27.439 14:47:39 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:27.439 14:47:39 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:27.439 14:47:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:27.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:27.439 14:47:39 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=116882 00:29:27.439 14:47:39 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 116882 00:29:27.439 14:47:39 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 116882 ']' 00:29:27.439 14:47:39 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:27.439 14:47:39 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:27.439 14:47:39 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:27.439 14:47:39 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:27.439 14:47:39 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:27.439 14:47:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:27.439 [2024-07-10 14:47:39.548818] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:29:27.439 [2024-07-10 14:47:39.548938] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:27.439 [2024-07-10 14:47:39.671240] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
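For the dif tests the transport is created with --dif-insert-or-strip, and the namespace that fio exercises is a null bdev carrying 16 bytes of metadata with DIF type 1 (NULL_META=16, NULL_BLOCK_SIZE=512, NULL_SIZE=64, NULL_DIF=1 in the trace above). Condensed from the RPCs traced below, with rpc.py assumed as the rpc_cmd backend and the null bdev size given in MB:

  scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

With --dif-insert-or-strip the TCP transport inserts and strips the protection information on the target side, so the initiator-side fio job can issue plain 4 KiB I/O without carrying metadata itself.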
00:29:27.439 [2024-07-10 14:47:39.684230] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.439 [2024-07-10 14:47:39.720499] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:27.439 [2024-07-10 14:47:39.720558] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:27.439 [2024-07-10 14:47:39.720570] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:27.439 [2024-07-10 14:47:39.720578] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:27.439 [2024-07-10 14:47:39.720585] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:27.439 [2024-07-10 14:47:39.720615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.697 14:47:39 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:27.697 14:47:39 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:29:27.697 14:47:39 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:27.697 14:47:39 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:27.697 14:47:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:27.697 14:47:39 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:27.697 14:47:39 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:29:27.697 14:47:39 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:27.697 14:47:39 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.697 14:47:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:27.697 [2024-07-10 14:47:39.841495] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:27.697 14:47:39 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.697 14:47:39 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:27.697 14:47:39 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:27.697 14:47:39 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:27.697 14:47:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:27.697 ************************************ 00:29:27.697 START TEST fio_dif_1_default 00:29:27.697 ************************************ 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:27.697 bdev_null0 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:27.697 [2024-07-10 14:47:39.893637] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:27.697 { 00:29:27.697 "params": { 00:29:27.697 "name": "Nvme$subsystem", 00:29:27.697 "trtype": "$TEST_TRANSPORT", 00:29:27.697 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.697 "adrfam": "ipv4", 00:29:27.697 "trsvcid": "$NVMF_PORT", 00:29:27.697 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.697 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.697 "hdgst": ${hdgst:-false}, 00:29:27.697 "ddgst": ${ddgst:-false} 00:29:27.697 }, 00:29:27.697 "method": "bdev_nvme_attach_controller" 00:29:27.697 } 00:29:27.697 EOF 00:29:27.697 )") 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:27.697 "params": { 00:29:27.697 "name": "Nvme0", 00:29:27.697 "trtype": "tcp", 00:29:27.697 "traddr": "10.0.0.2", 00:29:27.697 "adrfam": "ipv4", 00:29:27.697 "trsvcid": "4420", 00:29:27.697 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:27.697 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:27.697 "hdgst": false, 00:29:27.697 "ddgst": false 00:29:27.697 }, 00:29:27.697 "method": "bdev_nvme_attach_controller" 00:29:27.697 }' 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:27.697 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:27.698 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:27.698 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:27.698 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:27.698 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:27.698 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:27.698 14:47:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:27.956 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:27.956 fio-3.35 00:29:27.956 Starting 1 thread 00:29:40.152 00:29:40.152 filename0: (groupid=0, jobs=1): err= 0: pid=116946: Wed Jul 10 14:47:50 2024 00:29:40.152 read: IOPS=1293, BW=5173KiB/s (5297kB/s)(50.7MiB/10027msec) 00:29:40.152 slat (usec): min=7, max=325, avg=10.03, stdev= 6.58 00:29:40.152 clat (usec): min=452, max=42059, avg=3061.80, stdev=9744.60 00:29:40.152 lat (usec): min=460, max=42081, 
avg=3071.83, stdev=9745.19 00:29:40.152 clat percentiles (usec): 00:29:40.152 | 1.00th=[ 461], 5.00th=[ 474], 10.00th=[ 482], 20.00th=[ 490], 00:29:40.152 | 30.00th=[ 502], 40.00th=[ 515], 50.00th=[ 545], 60.00th=[ 594], 00:29:40.152 | 70.00th=[ 627], 80.00th=[ 652], 90.00th=[ 701], 95.00th=[40633], 00:29:40.152 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[42206], 00:29:40.152 | 99.99th=[42206] 00:29:40.152 bw ( KiB/s): min= 1696, max=12544, per=100.00%, avg=5184.40, stdev=2966.92, samples=20 00:29:40.152 iops : min= 424, max= 3136, avg=1296.10, stdev=741.73, samples=20 00:29:40.152 lat (usec) : 500=29.06%, 750=63.64%, 1000=0.89% 00:29:40.152 lat (msec) : 2=0.19%, 4=0.05%, 50=6.17% 00:29:40.152 cpu : usr=89.75%, sys=8.84%, ctx=84, majf=0, minf=0 00:29:40.152 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:40.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:40.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:40.152 issued rwts: total=12968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:40.152 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:40.152 00:29:40.152 Run status group 0 (all jobs): 00:29:40.152 READ: bw=5173KiB/s (5297kB/s), 5173KiB/s-5173KiB/s (5297kB/s-5297kB/s), io=50.7MiB (53.1MB), run=10027-10027msec 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:40.152 ************************************ 00:29:40.152 END TEST fio_dif_1_default 00:29:40.152 ************************************ 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.152 00:29:40.152 real 0m10.898s 00:29:40.152 user 0m9.559s 00:29:40.152 sys 0m1.113s 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:40.152 14:47:50 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:40.152 14:47:50 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:29:40.152 14:47:50 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:40.152 14:47:50 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:40.152 14:47:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:40.152 
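fio_dif_1_default drives that subsystem through SPDK's fio bdev plugin: gen_nvmf_target_json emits the bdev_nvme_attach_controller JSON printed above, and fio_bdev feeds it to fio via --spdk_json_conf (through /dev/fd in the harness). The generated job file itself is not printed in the log; a hypothetical single-command equivalent, inferred from the parameters fio reports (randread, 4 KiB blocks, iodepth 4) with the bdev name Nvme0n1, the config path, and the ~10 s runtime as assumptions, would look roughly like:

  LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
      --ioengine=spdk_bdev --spdk_json_conf=./bdev.json --thread=1 \
      --name=filename0 --filename=Nvme0n1 \
      --rw=randread --bs=4096 --iodepth=4 \
      --time_based --runtime=10
  # thread=1 because the SPDK fio plugin only supports fio's thread mode;
  # bdev.json stands in for the attach-controller blob shown in the trace.

Run from the initiator side of the veth setup, this reproduces the shape of the job whose results appear above.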
************************************ 00:29:40.152 START TEST fio_dif_1_multi_subsystems 00:29:40.152 ************************************ 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:40.152 bdev_null0 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:40.152 [2024-07-10 14:47:50.834529] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:29:40.152 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.153 14:47:50 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:40.153 bdev_null1 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:40.153 { 00:29:40.153 "params": { 00:29:40.153 "name": "Nvme$subsystem", 00:29:40.153 "trtype": "$TEST_TRANSPORT", 00:29:40.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.153 "adrfam": "ipv4", 00:29:40.153 "trsvcid": "$NVMF_PORT", 00:29:40.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.153 "hdgst": ${hdgst:-false}, 00:29:40.153 "ddgst": ${ddgst:-false} 00:29:40.153 }, 00:29:40.153 "method": "bdev_nvme_attach_controller" 00:29:40.153 } 00:29:40.153 EOF 00:29:40.153 )") 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- 
# gen_fio_conf 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:40.153 { 00:29:40.153 "params": { 00:29:40.153 "name": "Nvme$subsystem", 00:29:40.153 "trtype": "$TEST_TRANSPORT", 00:29:40.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.153 "adrfam": "ipv4", 00:29:40.153 "trsvcid": "$NVMF_PORT", 00:29:40.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.153 "hdgst": ${hdgst:-false}, 00:29:40.153 "ddgst": ${ddgst:-false} 00:29:40.153 }, 00:29:40.153 "method": "bdev_nvme_attach_controller" 00:29:40.153 } 00:29:40.153 EOF 00:29:40.153 )") 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
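The trace above assembles the bdev_nvme attach config on the fly and hands it, together with the generated fio job, to fio over /dev/fd/62 and /dev/fd/61. A standalone approximation is sketched below using ordinary files for readability; the "subsystems"/"config" wrapper and the Nvme0n1 bdev name follow the usual SPDK JSON-config and naming conventions and are not shown verbatim in this excerpt, while the attach parameters, plugin path and fio options are taken from the log (a second controller for cnode1 would simply be another entry in the same "config" array).

# Minimal sketch: resolved bdev_nvme config in a file, then fio with the SPDK bdev ioengine.
SPDK=/home/vagrant/spdk_repo/spdk
cat > /tmp/spdk_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Preload the fio plugin (as the LD_PRELOAD line in the trace does) and run one job against
# the attached controller's first namespace; job parameters mirror the iodepth=4 randread run above.
LD_PRELOAD="$SPDK/build/fio/spdk_bdev" /usr/src/fio/fio \
    --name=filename0 --filename=Nvme0n1 --thread --ioengine=spdk_bdev \
    --spdk_json_conf=/tmp/spdk_bdev.json --rw=randread --bs=4k --iodepth=4 \
    --time_based --runtime=10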
00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:40.153 "params": { 00:29:40.153 "name": "Nvme0", 00:29:40.153 "trtype": "tcp", 00:29:40.153 "traddr": "10.0.0.2", 00:29:40.153 "adrfam": "ipv4", 00:29:40.153 "trsvcid": "4420", 00:29:40.153 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:40.153 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:40.153 "hdgst": false, 00:29:40.153 "ddgst": false 00:29:40.153 }, 00:29:40.153 "method": "bdev_nvme_attach_controller" 00:29:40.153 },{ 00:29:40.153 "params": { 00:29:40.153 "name": "Nvme1", 00:29:40.153 "trtype": "tcp", 00:29:40.153 "traddr": "10.0.0.2", 00:29:40.153 "adrfam": "ipv4", 00:29:40.153 "trsvcid": "4420", 00:29:40.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:40.153 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:40.153 "hdgst": false, 00:29:40.153 "ddgst": false 00:29:40.153 }, 00:29:40.153 "method": "bdev_nvme_attach_controller" 00:29:40.153 }' 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:40.153 14:47:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:40.153 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:40.153 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:40.153 fio-3.35 00:29:40.153 Starting 2 threads 00:29:50.112 00:29:50.112 filename0: (groupid=0, jobs=1): err= 0: pid=117100: Wed Jul 10 14:48:01 2024 00:29:50.112 read: IOPS=378, BW=1513KiB/s (1550kB/s)(14.8MiB/10022msec) 00:29:50.112 slat (nsec): min=7793, max=92885, avg=12065.19, stdev=10309.37 00:29:50.112 clat (usec): min=467, max=43155, avg=10531.28, stdev=17365.79 00:29:50.112 lat (usec): min=475, max=43182, avg=10543.35, stdev=17367.76 00:29:50.112 clat percentiles (usec): 00:29:50.112 | 1.00th=[ 490], 5.00th=[ 523], 10.00th=[ 545], 20.00th=[ 611], 00:29:50.112 | 30.00th=[ 635], 40.00th=[ 652], 50.00th=[ 668], 60.00th=[ 693], 00:29:50.112 | 70.00th=[ 816], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:29:50.112 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:29:50.112 | 99.99th=[43254] 00:29:50.112 bw ( KiB/s): min= 448, max= 8160, per=57.31%, avg=1515.20, stdev=2045.80, samples=20 00:29:50.112 iops 
: min= 112, max= 2040, avg=378.80, stdev=511.45, samples=20 00:29:50.112 lat (usec) : 500=2.00%, 750=65.74%, 1000=3.06% 00:29:50.112 lat (msec) : 2=4.51%, 4=0.21%, 10=0.11%, 50=24.37% 00:29:50.112 cpu : usr=93.61%, sys=5.52%, ctx=17, majf=0, minf=0 00:29:50.112 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:50.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:50.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:50.113 issued rwts: total=3792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:50.113 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:50.113 filename1: (groupid=0, jobs=1): err= 0: pid=117101: Wed Jul 10 14:48:01 2024 00:29:50.113 read: IOPS=283, BW=1133KiB/s (1160kB/s)(11.1MiB/10041msec) 00:29:50.113 slat (nsec): min=7791, max=84342, avg=12464.24, stdev=10211.87 00:29:50.113 clat (usec): min=463, max=42863, avg=14079.20, stdev=19027.76 00:29:50.113 lat (usec): min=470, max=42891, avg=14091.66, stdev=19028.72 00:29:50.113 clat percentiles (usec): 00:29:50.113 | 1.00th=[ 482], 5.00th=[ 502], 10.00th=[ 529], 20.00th=[ 570], 00:29:50.113 | 30.00th=[ 603], 40.00th=[ 635], 50.00th=[ 717], 60.00th=[ 1139], 00:29:50.113 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:29:50.113 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:29:50.113 | 99.99th=[42730] 00:29:50.113 bw ( KiB/s): min= 416, max= 2688, per=42.97%, avg=1136.20, stdev=628.28, samples=20 00:29:50.113 iops : min= 104, max= 672, avg=284.05, stdev=157.07, samples=20 00:29:50.113 lat (usec) : 500=4.22%, 750=47.82%, 1000=3.66% 00:29:50.113 lat (msec) : 2=10.55%, 4=0.56%, 10=0.14%, 50=33.05% 00:29:50.113 cpu : usr=93.87%, sys=5.36%, ctx=20, majf=0, minf=0 00:29:50.113 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:50.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:50.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:50.113 issued rwts: total=2844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:50.113 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:50.113 00:29:50.113 Run status group 0 (all jobs): 00:29:50.113 READ: bw=2644KiB/s (2707kB/s), 1133KiB/s-1513KiB/s (1160kB/s-1550kB/s), io=25.9MiB (27.2MB), run=10022-10041msec 00:29:50.113 14:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:29:50.113 14:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:29:50.113 14:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:50.113 14:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:50.113 14:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:29:50.113 14:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:50.113 14:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.113 14:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:50.113 14:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.113 14:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:50.113 14:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 
-- # xtrace_disable 00:29:50.113 14:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:50.113 14:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.113 14:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:50.113 14:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:50.113 14:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:29:50.113 14:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:50.113 14:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.113 14:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:50.113 14:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.113 14:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:50.113 14:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.113 14:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:50.113 ************************************ 00:29:50.113 END TEST fio_dif_1_multi_subsystems 00:29:50.113 ************************************ 00:29:50.113 14:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.113 00:29:50.113 real 0m11.084s 00:29:50.113 user 0m19.524s 00:29:50.113 sys 0m1.349s 00:29:50.113 14:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:50.113 14:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:50.113 14:48:01 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:50.113 14:48:01 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:29:50.113 14:48:01 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:50.113 14:48:01 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:50.113 14:48:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:50.113 ************************************ 00:29:50.113 START TEST fio_dif_rand_params 00:29:50.113 ************************************ 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@31 -- # create_subsystem 0 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:50.113 bdev_null0 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:50.113 [2024-07-10 14:48:01.963925] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:50.113 { 00:29:50.113 "params": { 00:29:50.113 "name": "Nvme$subsystem", 00:29:50.113 "trtype": "$TEST_TRANSPORT", 00:29:50.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.113 "adrfam": "ipv4", 00:29:50.113 "trsvcid": "$NVMF_PORT", 00:29:50.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.113 "hdgst": ${hdgst:-false}, 00:29:50.113 
"ddgst": ${ddgst:-false} 00:29:50.113 }, 00:29:50.113 "method": "bdev_nvme_attach_controller" 00:29:50.113 } 00:29:50.113 EOF 00:29:50.113 )") 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:50.113 14:48:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:50.113 "params": { 00:29:50.113 "name": "Nvme0", 00:29:50.113 "trtype": "tcp", 00:29:50.113 "traddr": "10.0.0.2", 00:29:50.113 "adrfam": "ipv4", 00:29:50.113 "trsvcid": "4420", 00:29:50.114 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:50.114 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:50.114 "hdgst": false, 00:29:50.114 "ddgst": false 00:29:50.114 }, 00:29:50.114 "method": "bdev_nvme_attach_controller" 00:29:50.114 }' 00:29:50.114 14:48:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:50.114 14:48:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:50.114 14:48:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:50.114 14:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:50.114 14:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:50.114 14:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:50.114 14:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:50.114 14:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:50.114 14:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:50.114 14:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:50.114 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:50.114 ... 
00:29:50.114 fio-3.35 00:29:50.114 Starting 3 threads 00:29:55.372 00:29:55.372 filename0: (groupid=0, jobs=1): err= 0: pid=117248: Wed Jul 10 14:48:07 2024 00:29:55.372 read: IOPS=187, BW=23.4MiB/s (24.5MB/s)(117MiB/5005msec) 00:29:55.372 slat (nsec): min=5120, max=79342, avg=21584.90, stdev=9891.70 00:29:55.372 clat (usec): min=4492, max=55103, avg=16001.66, stdev=11542.70 00:29:55.372 lat (usec): min=4504, max=55142, avg=16023.24, stdev=11543.75 00:29:55.372 clat percentiles (usec): 00:29:55.372 | 1.00th=[ 7898], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[11338], 00:29:55.372 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12649], 00:29:55.372 | 70.00th=[13304], 80.00th=[14353], 90.00th=[21103], 95.00th=[52691], 00:29:55.372 | 99.00th=[53740], 99.50th=[53740], 99.90th=[55313], 99.95th=[55313], 00:29:55.372 | 99.99th=[55313] 00:29:55.372 bw ( KiB/s): min=18944, max=32256, per=29.54%, avg=23910.40, stdev=4554.27, samples=10 00:29:55.372 iops : min= 148, max= 252, avg=186.80, stdev=35.58, samples=10 00:29:55.372 lat (msec) : 10=6.41%, 20=82.91%, 50=2.56%, 100=8.12% 00:29:55.372 cpu : usr=89.73%, sys=7.89%, ctx=6, majf=0, minf=0 00:29:55.372 IO depths : 1=5.6%, 2=94.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:55.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.372 issued rwts: total=936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:55.372 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:55.372 filename0: (groupid=0, jobs=1): err= 0: pid=117249: Wed Jul 10 14:48:07 2024 00:29:55.372 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(134MiB/5004msec) 00:29:55.372 slat (nsec): min=8119, max=76969, avg=25950.77, stdev=9409.08 00:29:55.372 clat (usec): min=4429, max=56249, avg=13958.17, stdev=4947.69 00:29:55.373 lat (usec): min=4447, max=56268, avg=13984.12, stdev=4948.66 00:29:55.373 clat percentiles (usec): 00:29:55.373 | 1.00th=[ 4555], 5.00th=[ 8225], 10.00th=[ 9241], 20.00th=[ 9896], 00:29:55.373 | 30.00th=[10814], 40.00th=[13566], 50.00th=[14222], 60.00th=[14746], 00:29:55.373 | 70.00th=[15401], 80.00th=[16450], 90.00th=[19268], 95.00th=[20579], 00:29:55.373 | 99.00th=[28443], 99.50th=[44303], 99.90th=[48497], 99.95th=[56361], 00:29:55.373 | 99.99th=[56361] 00:29:55.373 bw ( KiB/s): min=20736, max=33536, per=33.52%, avg=27136.00, stdev=4820.01, samples=9 00:29:55.373 iops : min= 162, max= 262, avg=212.00, stdev=37.66, samples=9 00:29:55.373 lat (msec) : 10=20.88%, 20=72.23%, 50=6.80%, 100=0.09% 00:29:55.373 cpu : usr=89.69%, sys=7.76%, ctx=18, majf=0, minf=0 00:29:55.373 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:55.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.373 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.373 issued rwts: total=1073,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:55.373 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:55.373 filename0: (groupid=0, jobs=1): err= 0: pid=117250: Wed Jul 10 14:48:07 2024 00:29:55.373 read: IOPS=231, BW=28.9MiB/s (30.3MB/s)(145MiB/5003msec) 00:29:55.373 slat (nsec): min=5181, max=75087, avg=19325.96, stdev=8665.85 00:29:55.373 clat (usec): min=6815, max=55642, avg=12954.97, stdev=6145.73 00:29:55.373 lat (usec): min=6827, max=55666, avg=12974.30, stdev=6147.56 00:29:55.373 clat percentiles (usec): 00:29:55.373 | 1.00th=[ 7373], 5.00th=[ 7701], 10.00th=[ 7963], 20.00th=[ 8586], 
00:29:55.373 | 30.00th=[10028], 40.00th=[11994], 50.00th=[12649], 60.00th=[13304], 00:29:55.373 | 70.00th=[13960], 80.00th=[14615], 90.00th=[16188], 95.00th=[18482], 00:29:55.373 | 99.00th=[53216], 99.50th=[53740], 99.90th=[54789], 99.95th=[55837], 00:29:55.373 | 99.99th=[55837] 00:29:55.373 bw ( KiB/s): min=18432, max=37120, per=35.53%, avg=28757.33, stdev=6193.14, samples=9 00:29:55.373 iops : min= 144, max= 290, avg=224.67, stdev=48.38, samples=9 00:29:55.373 lat (msec) : 10=29.50%, 20=66.26%, 50=3.03%, 100=1.21% 00:29:55.373 cpu : usr=89.62%, sys=8.12%, ctx=8, majf=0, minf=0 00:29:55.373 IO depths : 1=3.2%, 2=96.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:55.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.373 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.373 issued rwts: total=1156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:55.373 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:55.373 00:29:55.373 Run status group 0 (all jobs): 00:29:55.373 READ: bw=79.0MiB/s (82.9MB/s), 23.4MiB/s-28.9MiB/s (24.5MB/s-30.3MB/s), io=396MiB (415MB), run=5003-5005msec 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:55.632 bdev_null0 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:55.632 [2024-07-10 14:48:07.836348] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:55.632 bdev_null1 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:55.632 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:55.633 bdev_null2 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 
-- # local file 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:55.633 { 00:29:55.633 "params": { 00:29:55.633 "name": "Nvme$subsystem", 00:29:55.633 "trtype": "$TEST_TRANSPORT", 00:29:55.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:55.633 "adrfam": "ipv4", 00:29:55.633 "trsvcid": "$NVMF_PORT", 00:29:55.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:55.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:55.633 "hdgst": ${hdgst:-false}, 00:29:55.633 "ddgst": ${ddgst:-false} 00:29:55.633 }, 00:29:55.633 "method": "bdev_nvme_attach_controller" 00:29:55.633 } 00:29:55.633 EOF 00:29:55.633 )") 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:55.633 { 00:29:55.633 "params": { 00:29:55.633 "name": "Nvme$subsystem", 00:29:55.633 "trtype": "$TEST_TRANSPORT", 00:29:55.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:55.633 "adrfam": "ipv4", 00:29:55.633 "trsvcid": "$NVMF_PORT", 00:29:55.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:55.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:55.633 "hdgst": ${hdgst:-false}, 00:29:55.633 "ddgst": ${ddgst:-false} 00:29:55.633 }, 00:29:55.633 "method": "bdev_nvme_attach_controller" 00:29:55.633 } 00:29:55.633 EOF 00:29:55.633 )") 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:55.633 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:55.633 14:48:07 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:55.891 14:48:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:55.891 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:55.891 14:48:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:55.891 { 00:29:55.891 "params": { 00:29:55.891 "name": "Nvme$subsystem", 00:29:55.891 "trtype": "$TEST_TRANSPORT", 00:29:55.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:55.891 "adrfam": "ipv4", 00:29:55.891 "trsvcid": "$NVMF_PORT", 00:29:55.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:55.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:55.891 "hdgst": ${hdgst:-false}, 00:29:55.891 "ddgst": ${ddgst:-false} 00:29:55.891 }, 00:29:55.891 "method": "bdev_nvme_attach_controller" 00:29:55.891 } 00:29:55.891 EOF 00:29:55.891 )") 00:29:55.891 14:48:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:55.891 14:48:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:55.892 14:48:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:29:55.892 14:48:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:55.892 14:48:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:55.892 "params": { 00:29:55.892 "name": "Nvme0", 00:29:55.892 "trtype": "tcp", 00:29:55.892 "traddr": "10.0.0.2", 00:29:55.892 "adrfam": "ipv4", 00:29:55.892 "trsvcid": "4420", 00:29:55.892 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:55.892 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:55.892 "hdgst": false, 00:29:55.892 "ddgst": false 00:29:55.892 }, 00:29:55.892 "method": "bdev_nvme_attach_controller" 00:29:55.892 },{ 00:29:55.892 "params": { 00:29:55.892 "name": "Nvme1", 00:29:55.892 "trtype": "tcp", 00:29:55.892 "traddr": "10.0.0.2", 00:29:55.892 "adrfam": "ipv4", 00:29:55.892 "trsvcid": "4420", 00:29:55.892 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:55.892 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:55.892 "hdgst": false, 00:29:55.892 "ddgst": false 00:29:55.892 }, 00:29:55.892 "method": "bdev_nvme_attach_controller" 00:29:55.892 },{ 00:29:55.892 "params": { 00:29:55.892 "name": "Nvme2", 00:29:55.892 "trtype": "tcp", 00:29:55.892 "traddr": "10.0.0.2", 00:29:55.892 "adrfam": "ipv4", 00:29:55.892 "trsvcid": "4420", 00:29:55.892 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:55.892 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:55.892 "hdgst": false, 00:29:55.892 "ddgst": false 00:29:55.892 }, 00:29:55.892 "method": "bdev_nvme_attach_controller" 00:29:55.892 }' 00:29:55.892 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:55.892 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:55.892 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:55.892 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:55.892 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:55.892 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:55.892 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:55.892 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:55.892 
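Before each fio launch the harness probes the plugin with ldd for sanitizer runtimes and builds LD_PRELOAD from the result, which is what the repeated ldd | grep | awk triples above are doing. Below is a small standalone sketch of that pattern, not the exact autotest_common.sh code; the helper name and structure are illustrative, the plugin path is from the trace.

# Sketch: reproduce the sanitizer probe the harness runs before exec'ing fio.
detect_preload() {
  local plugin=$1 preload='' asan_lib='' sanitizer
  for sanitizer in libasan libclang_rt.asan; do
    # Column 3 of ldd output is the resolved path, e.g. "libasan.so.8 => /usr/lib64/libasan.so.8 (...)".
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    # If the plugin was built with ASan, its runtime must be preloaded ahead of the plugin itself.
    [[ -n $asan_lib ]] && preload+="$asan_lib "
  done
  printf '%s%s\n' "$preload" "$plugin"
}

# Usage matching the LD_PRELOAD assignment in the trace (an empty probe leaves just the plugin path):
LD_PRELOAD=$(detect_preload /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev)
export LD_PRELOAD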
14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:55.892 14:48:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:55.892 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:55.892 ... 00:29:55.892 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:55.892 ... 00:29:55.892 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:55.892 ... 00:29:55.892 fio-3.35 00:29:55.892 Starting 24 threads 00:30:08.094 00:30:08.094 filename0: (groupid=0, jobs=1): err= 0: pid=117340: Wed Jul 10 14:48:18 2024 00:30:08.094 read: IOPS=181, BW=727KiB/s (744kB/s)(7320KiB/10069msec) 00:30:08.094 slat (usec): min=3, max=8055, avg=40.49, stdev=375.38 00:30:08.094 clat (msec): min=17, max=203, avg=87.61, stdev=29.88 00:30:08.094 lat (msec): min=17, max=203, avg=87.65, stdev=29.90 00:30:08.094 clat percentiles (msec): 00:30:08.094 | 1.00th=[ 27], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 63], 00:30:08.094 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 84], 60.00th=[ 94], 00:30:08.094 | 70.00th=[ 97], 80.00th=[ 109], 90.00th=[ 126], 95.00th=[ 144], 00:30:08.094 | 99.00th=[ 180], 99.50th=[ 180], 99.90th=[ 203], 99.95th=[ 203], 00:30:08.094 | 99.99th=[ 203] 00:30:08.094 bw ( KiB/s): min= 512, max= 1015, per=4.36%, avg=725.15, stdev=127.78, samples=20 00:30:08.094 iops : min= 128, max= 253, avg=181.25, stdev=31.86, samples=20 00:30:08.094 lat (msec) : 20=0.87%, 50=5.90%, 100=64.43%, 250=28.80% 00:30:08.094 cpu : usr=32.31%, sys=1.44%, ctx=891, majf=0, minf=9 00:30:08.094 IO depths : 1=2.0%, 2=4.0%, 4=11.8%, 8=71.1%, 16=11.0%, 32=0.0%, >=64=0.0% 00:30:08.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.094 complete : 0=0.0%, 4=90.3%, 8=4.6%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.094 issued rwts: total=1830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.094 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:08.094 filename0: (groupid=0, jobs=1): err= 0: pid=117341: Wed Jul 10 14:48:18 2024 00:30:08.094 read: IOPS=203, BW=815KiB/s (835kB/s)(8212KiB/10075msec) 00:30:08.094 slat (usec): min=4, max=8111, avg=26.02, stdev=308.18 00:30:08.094 clat (msec): min=3, max=147, avg=78.36, stdev=25.76 00:30:08.094 lat (msec): min=3, max=147, avg=78.39, stdev=25.76 00:30:08.094 clat percentiles (msec): 00:30:08.094 | 1.00th=[ 5], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 61], 00:30:08.094 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 84], 00:30:08.094 | 70.00th=[ 95], 80.00th=[ 96], 90.00th=[ 111], 95.00th=[ 121], 00:30:08.094 | 99.00th=[ 138], 99.50th=[ 144], 99.90th=[ 148], 99.95th=[ 148], 00:30:08.094 | 99.99th=[ 148] 00:30:08.094 bw ( KiB/s): min= 640, max= 1584, per=4.90%, avg=814.80, stdev=200.48, samples=20 00:30:08.094 iops : min= 160, max= 396, avg=203.70, stdev=50.12, samples=20 00:30:08.094 lat (msec) : 4=0.63%, 10=2.48%, 50=10.13%, 100=70.53%, 250=16.22% 00:30:08.094 cpu : usr=32.83%, sys=1.58%, ctx=909, majf=0, minf=9 00:30:08.094 IO depths : 1=0.6%, 2=1.3%, 4=7.2%, 8=77.8%, 16=13.1%, 32=0.0%, >=64=0.0% 00:30:08.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.094 complete : 0=0.0%, 4=89.4%, 8=6.2%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:30:08.094 issued rwts: total=2053,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.094 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:08.094 filename0: (groupid=0, jobs=1): err= 0: pid=117342: Wed Jul 10 14:48:18 2024 00:30:08.094 read: IOPS=195, BW=780KiB/s (799kB/s)(7860KiB/10076msec) 00:30:08.094 slat (usec): min=5, max=8049, avg=35.25, stdev=182.72 00:30:08.094 clat (msec): min=10, max=185, avg=81.80, stdev=26.13 00:30:08.094 lat (msec): min=10, max=185, avg=81.84, stdev=26.14 00:30:08.094 clat percentiles (msec): 00:30:08.094 | 1.00th=[ 13], 5.00th=[ 45], 10.00th=[ 58], 20.00th=[ 62], 00:30:08.094 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 84], 00:30:08.094 | 70.00th=[ 92], 80.00th=[ 100], 90.00th=[ 116], 95.00th=[ 130], 00:30:08.094 | 99.00th=[ 157], 99.50th=[ 176], 99.90th=[ 186], 99.95th=[ 186], 00:30:08.094 | 99.99th=[ 186] 00:30:08.094 bw ( KiB/s): min= 512, max= 1282, per=4.69%, avg=779.70, stdev=152.28, samples=20 00:30:08.094 iops : min= 128, max= 320, avg=194.90, stdev=37.98, samples=20 00:30:08.094 lat (msec) : 20=1.63%, 50=5.90%, 100=72.67%, 250=19.80% 00:30:08.094 cpu : usr=34.49%, sys=1.53%, ctx=1176, majf=0, minf=9 00:30:08.094 IO depths : 1=0.7%, 2=1.4%, 4=8.1%, 8=76.7%, 16=13.1%, 32=0.0%, >=64=0.0% 00:30:08.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.094 complete : 0=0.0%, 4=89.6%, 8=6.1%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.094 issued rwts: total=1965,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.094 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:08.094 filename0: (groupid=0, jobs=1): err= 0: pid=117343: Wed Jul 10 14:48:18 2024 00:30:08.094 read: IOPS=184, BW=739KiB/s (757kB/s)(7424KiB/10042msec) 00:30:08.094 slat (usec): min=7, max=4052, avg=18.58, stdev=95.06 00:30:08.094 clat (msec): min=32, max=168, avg=86.38, stdev=26.27 00:30:08.094 lat (msec): min=32, max=168, avg=86.40, stdev=26.27 00:30:08.094 clat percentiles (msec): 00:30:08.094 | 1.00th=[ 43], 5.00th=[ 50], 10.00th=[ 55], 20.00th=[ 62], 00:30:08.094 | 30.00th=[ 71], 40.00th=[ 77], 50.00th=[ 84], 60.00th=[ 90], 00:30:08.094 | 70.00th=[ 101], 80.00th=[ 112], 90.00th=[ 125], 95.00th=[ 132], 00:30:08.094 | 99.00th=[ 148], 99.50th=[ 161], 99.90th=[ 169], 99.95th=[ 169], 00:30:08.094 | 99.99th=[ 169] 00:30:08.094 bw ( KiB/s): min= 512, max= 1120, per=4.43%, avg=736.90, stdev=150.58, samples=20 00:30:08.094 iops : min= 128, max= 280, avg=184.20, stdev=37.64, samples=20 00:30:08.094 lat (msec) : 50=5.60%, 100=66.11%, 250=28.29% 00:30:08.094 cpu : usr=39.92%, sys=1.63%, ctx=1244, majf=0, minf=9 00:30:08.094 IO depths : 1=0.5%, 2=1.0%, 4=6.4%, 8=78.6%, 16=13.5%, 32=0.0%, >=64=0.0% 00:30:08.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.094 complete : 0=0.0%, 4=89.3%, 8=6.6%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.094 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.094 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:08.094 filename0: (groupid=0, jobs=1): err= 0: pid=117344: Wed Jul 10 14:48:18 2024 00:30:08.094 read: IOPS=163, BW=655KiB/s (670kB/s)(6576KiB/10046msec) 00:30:08.094 slat (usec): min=4, max=8056, avg=35.29, stdev=383.73 00:30:08.094 clat (msec): min=36, max=364, avg=97.57, stdev=36.43 00:30:08.094 lat (msec): min=36, max=364, avg=97.61, stdev=36.43 00:30:08.094 clat percentiles (msec): 00:30:08.094 | 1.00th=[ 48], 5.00th=[ 58], 10.00th=[ 64], 20.00th=[ 71], 00:30:08.094 | 30.00th=[ 77], 40.00th=[ 85], 50.00th=[ 91], 
60.00th=[ 100], 00:30:08.094 | 70.00th=[ 109], 80.00th=[ 121], 90.00th=[ 136], 95.00th=[ 148], 00:30:08.094 | 99.00th=[ 190], 99.50th=[ 330], 99.90th=[ 363], 99.95th=[ 363], 00:30:08.094 | 99.99th=[ 363] 00:30:08.094 bw ( KiB/s): min= 384, max= 896, per=3.92%, avg=651.20, stdev=132.87, samples=20 00:30:08.094 iops : min= 96, max= 224, avg=162.80, stdev=33.22, samples=20 00:30:08.094 lat (msec) : 50=3.28%, 100=58.52%, 250=37.23%, 500=0.97% 00:30:08.094 cpu : usr=36.52%, sys=1.39%, ctx=1176, majf=0, minf=9 00:30:08.094 IO depths : 1=2.6%, 2=5.6%, 4=15.4%, 8=66.1%, 16=10.3%, 32=0.0%, >=64=0.0% 00:30:08.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.094 complete : 0=0.0%, 4=91.4%, 8=3.3%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.094 issued rwts: total=1644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.094 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:08.094 filename0: (groupid=0, jobs=1): err= 0: pid=117345: Wed Jul 10 14:48:18 2024 00:30:08.094 read: IOPS=156, BW=627KiB/s (642kB/s)(6296KiB/10039msec) 00:30:08.094 slat (nsec): min=3855, max=64080, avg=14001.86, stdev=6223.23 00:30:08.094 clat (msec): min=48, max=202, avg=101.92, stdev=25.78 00:30:08.094 lat (msec): min=48, max=202, avg=101.94, stdev=25.78 00:30:08.094 clat percentiles (msec): 00:30:08.094 | 1.00th=[ 61], 5.00th=[ 66], 10.00th=[ 72], 20.00th=[ 81], 00:30:08.094 | 30.00th=[ 85], 40.00th=[ 96], 50.00th=[ 99], 60.00th=[ 108], 00:30:08.094 | 70.00th=[ 117], 80.00th=[ 122], 90.00th=[ 132], 95.00th=[ 144], 00:30:08.094 | 99.00th=[ 203], 99.50th=[ 203], 99.90th=[ 203], 99.95th=[ 203], 00:30:08.094 | 99.99th=[ 203] 00:30:08.094 bw ( KiB/s): min= 384, max= 769, per=3.74%, avg=622.80, stdev=101.50, samples=20 00:30:08.094 iops : min= 96, max= 192, avg=155.65, stdev=25.37, samples=20 00:30:08.095 lat (msec) : 50=0.19%, 100=52.41%, 250=47.40% 00:30:08.095 cpu : usr=33.38%, sys=1.22%, ctx=932, majf=0, minf=9 00:30:08.095 IO depths : 1=2.8%, 2=6.8%, 4=18.0%, 8=62.5%, 16=10.0%, 32=0.0%, >=64=0.0% 00:30:08.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.095 complete : 0=0.0%, 4=92.3%, 8=2.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.095 issued rwts: total=1574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.095 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:08.095 filename0: (groupid=0, jobs=1): err= 0: pid=117346: Wed Jul 10 14:48:18 2024 00:30:08.095 read: IOPS=157, BW=629KiB/s (644kB/s)(6316KiB/10037msec) 00:30:08.095 slat (usec): min=5, max=8050, avg=48.19, stdev=388.67 00:30:08.095 clat (msec): min=47, max=186, avg=101.16, stdev=25.36 00:30:08.095 lat (msec): min=47, max=186, avg=101.21, stdev=25.35 00:30:08.095 clat percentiles (msec): 00:30:08.095 | 1.00th=[ 48], 5.00th=[ 63], 10.00th=[ 70], 20.00th=[ 78], 00:30:08.095 | 30.00th=[ 87], 40.00th=[ 96], 50.00th=[ 102], 60.00th=[ 107], 00:30:08.095 | 70.00th=[ 116], 80.00th=[ 125], 90.00th=[ 132], 95.00th=[ 144], 00:30:08.095 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 188], 99.95th=[ 188], 00:30:08.095 | 99.99th=[ 188] 00:30:08.095 bw ( KiB/s): min= 472, max= 896, per=3.78%, avg=629.15, stdev=107.20, samples=20 00:30:08.095 iops : min= 118, max= 224, avg=157.25, stdev=26.77, samples=20 00:30:08.095 lat (msec) : 50=1.58%, 100=46.93%, 250=51.49% 00:30:08.095 cpu : usr=39.55%, sys=1.71%, ctx=1314, majf=0, minf=9 00:30:08.095 IO depths : 1=2.9%, 2=6.5%, 4=17.5%, 8=62.9%, 16=10.1%, 32=0.0%, >=64=0.0% 00:30:08.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:30:08.095 complete : 0=0.0%, 4=91.4%, 8=3.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.095 issued rwts: total=1579,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.095 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:08.095 filename0: (groupid=0, jobs=1): err= 0: pid=117347: Wed Jul 10 14:48:18 2024 00:30:08.095 read: IOPS=154, BW=617KiB/s (632kB/s)(6196KiB/10041msec) 00:30:08.095 slat (usec): min=4, max=8050, avg=35.68, stdev=317.77 00:30:08.095 clat (msec): min=47, max=214, avg=103.47, stdev=27.22 00:30:08.095 lat (msec): min=47, max=214, avg=103.50, stdev=27.21 00:30:08.095 clat percentiles (msec): 00:30:08.095 | 1.00th=[ 55], 5.00th=[ 69], 10.00th=[ 72], 20.00th=[ 80], 00:30:08.095 | 30.00th=[ 86], 40.00th=[ 96], 50.00th=[ 104], 60.00th=[ 108], 00:30:08.095 | 70.00th=[ 113], 80.00th=[ 124], 90.00th=[ 132], 95.00th=[ 157], 00:30:08.095 | 99.00th=[ 190], 99.50th=[ 203], 99.90th=[ 215], 99.95th=[ 215], 00:30:08.095 | 99.99th=[ 215] 00:30:08.095 bw ( KiB/s): min= 432, max= 768, per=3.68%, avg=612.75, stdev=94.12, samples=20 00:30:08.095 iops : min= 108, max= 192, avg=153.15, stdev=23.53, samples=20 00:30:08.095 lat (msec) : 50=0.39%, 100=47.32%, 250=52.29% 00:30:08.095 cpu : usr=36.58%, sys=1.59%, ctx=1050, majf=0, minf=9 00:30:08.095 IO depths : 1=3.1%, 2=6.5%, 4=16.4%, 8=64.2%, 16=9.9%, 32=0.0%, >=64=0.0% 00:30:08.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.095 complete : 0=0.0%, 4=91.4%, 8=3.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.095 issued rwts: total=1549,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.095 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:08.095 filename1: (groupid=0, jobs=1): err= 0: pid=117348: Wed Jul 10 14:48:18 2024 00:30:08.095 read: IOPS=154, BW=617KiB/s (632kB/s)(6188KiB/10032msec) 00:30:08.095 slat (usec): min=4, max=8057, avg=30.54, stdev=353.57 00:30:08.095 clat (msec): min=45, max=215, avg=103.49, stdev=29.98 00:30:08.095 lat (msec): min=45, max=215, avg=103.52, stdev=29.99 00:30:08.095 clat percentiles (msec): 00:30:08.095 | 1.00th=[ 47], 5.00th=[ 61], 10.00th=[ 71], 20.00th=[ 74], 00:30:08.095 | 30.00th=[ 85], 40.00th=[ 96], 50.00th=[ 101], 60.00th=[ 108], 00:30:08.095 | 70.00th=[ 120], 80.00th=[ 128], 90.00th=[ 136], 95.00th=[ 157], 00:30:08.095 | 99.00th=[ 215], 99.50th=[ 215], 99.90th=[ 215], 99.95th=[ 215], 00:30:08.095 | 99.99th=[ 215] 00:30:08.095 bw ( KiB/s): min= 384, max= 800, per=3.68%, avg=611.90, stdev=102.97, samples=20 00:30:08.095 iops : min= 96, max= 200, avg=152.95, stdev=25.76, samples=20 00:30:08.095 lat (msec) : 50=2.20%, 100=48.22%, 250=49.58% 00:30:08.095 cpu : usr=31.39%, sys=1.19%, ctx=853, majf=0, minf=9 00:30:08.095 IO depths : 1=2.5%, 2=5.2%, 4=14.9%, 8=66.6%, 16=10.7%, 32=0.0%, >=64=0.0% 00:30:08.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.095 complete : 0=0.0%, 4=91.1%, 8=3.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.095 issued rwts: total=1547,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.095 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:08.095 filename1: (groupid=0, jobs=1): err= 0: pid=117349: Wed Jul 10 14:48:18 2024 00:30:08.095 read: IOPS=166, BW=666KiB/s (682kB/s)(6696KiB/10054msec) 00:30:08.095 slat (usec): min=4, max=8069, avg=29.54, stdev=340.67 00:30:08.095 clat (msec): min=27, max=191, avg=95.86, stdev=27.72 00:30:08.095 lat (msec): min=27, max=191, avg=95.89, stdev=27.73 00:30:08.095 clat percentiles (msec): 00:30:08.095 | 1.00th=[ 43], 5.00th=[ 57], 
10.00th=[ 61], 20.00th=[ 72], 00:30:08.095 | 30.00th=[ 82], 40.00th=[ 87], 50.00th=[ 96], 60.00th=[ 100], 00:30:08.095 | 70.00th=[ 108], 80.00th=[ 117], 90.00th=[ 132], 95.00th=[ 144], 00:30:08.095 | 99.00th=[ 190], 99.50th=[ 190], 99.90th=[ 192], 99.95th=[ 192], 00:30:08.095 | 99.99th=[ 192] 00:30:08.095 bw ( KiB/s): min= 512, max= 865, per=3.98%, avg=662.85, stdev=105.51, samples=20 00:30:08.095 iops : min= 128, max= 216, avg=165.70, stdev=26.35, samples=20 00:30:08.095 lat (msec) : 50=3.23%, 100=57.41%, 250=39.37% 00:30:08.095 cpu : usr=31.61%, sys=1.24%, ctx=894, majf=0, minf=9 00:30:08.095 IO depths : 1=2.1%, 2=4.3%, 4=12.0%, 8=70.0%, 16=11.6%, 32=0.0%, >=64=0.0% 00:30:08.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.095 complete : 0=0.0%, 4=90.8%, 8=4.7%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.095 issued rwts: total=1674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.095 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:08.095 filename1: (groupid=0, jobs=1): err= 0: pid=117350: Wed Jul 10 14:48:18 2024 00:30:08.095 read: IOPS=187, BW=752KiB/s (770kB/s)(7572KiB/10070msec) 00:30:08.095 slat (usec): min=6, max=8115, avg=53.35, stdev=412.92 00:30:08.095 clat (msec): min=32, max=200, avg=84.69, stdev=25.57 00:30:08.095 lat (msec): min=32, max=200, avg=84.74, stdev=25.57 00:30:08.095 clat percentiles (msec): 00:30:08.095 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 62], 00:30:08.095 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 84], 60.00th=[ 90], 00:30:08.095 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 121], 95.00th=[ 132], 00:30:08.095 | 99.00th=[ 148], 99.50th=[ 186], 99.90th=[ 201], 99.95th=[ 201], 00:30:08.095 | 99.99th=[ 201] 00:30:08.095 bw ( KiB/s): min= 512, max= 992, per=4.51%, avg=750.40, stdev=122.91, samples=20 00:30:08.095 iops : min= 128, max= 248, avg=187.60, stdev=30.73, samples=20 00:30:08.095 lat (msec) : 50=7.08%, 100=71.05%, 250=21.87% 00:30:08.095 cpu : usr=36.27%, sys=1.45%, ctx=1008, majf=0, minf=9 00:30:08.095 IO depths : 1=0.2%, 2=0.5%, 4=6.3%, 8=79.0%, 16=14.1%, 32=0.0%, >=64=0.0% 00:30:08.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.095 complete : 0=0.0%, 4=89.0%, 8=7.1%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.095 issued rwts: total=1893,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.095 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:08.095 filename1: (groupid=0, jobs=1): err= 0: pid=117351: Wed Jul 10 14:48:18 2024 00:30:08.095 read: IOPS=185, BW=742KiB/s (760kB/s)(7456KiB/10046msec) 00:30:08.095 slat (usec): min=4, max=8053, avg=20.97, stdev=186.41 00:30:08.095 clat (msec): min=38, max=186, avg=86.11, stdev=24.85 00:30:08.095 lat (msec): min=38, max=186, avg=86.13, stdev=24.85 00:30:08.095 clat percentiles (msec): 00:30:08.095 | 1.00th=[ 45], 5.00th=[ 54], 10.00th=[ 58], 20.00th=[ 66], 00:30:08.095 | 30.00th=[ 71], 40.00th=[ 78], 50.00th=[ 82], 60.00th=[ 88], 00:30:08.095 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 121], 95.00th=[ 132], 00:30:08.095 | 99.00th=[ 169], 99.50th=[ 169], 99.90th=[ 188], 99.95th=[ 188], 00:30:08.095 | 99.99th=[ 188] 00:30:08.095 bw ( KiB/s): min= 504, max= 976, per=4.45%, avg=739.20, stdev=101.01, samples=20 00:30:08.095 iops : min= 126, max= 244, avg=184.80, stdev=25.25, samples=20 00:30:08.095 lat (msec) : 50=3.81%, 100=70.98%, 250=25.21% 00:30:08.095 cpu : usr=40.19%, sys=1.47%, ctx=1124, majf=0, minf=9 00:30:08.095 IO depths : 1=1.4%, 2=3.5%, 4=11.0%, 8=72.3%, 16=11.9%, 32=0.0%, >=64=0.0% 00:30:08.095 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.095 complete : 0=0.0%, 4=90.6%, 8=4.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.095 issued rwts: total=1864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.095 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:08.095 filename1: (groupid=0, jobs=1): err= 0: pid=117352: Wed Jul 10 14:48:18 2024 00:30:08.095 read: IOPS=189, BW=757KiB/s (775kB/s)(7608KiB/10049msec) 00:30:08.095 slat (usec): min=5, max=4049, avg=20.84, stdev=131.14 00:30:08.095 clat (msec): min=40, max=167, avg=84.25, stdev=23.09 00:30:08.095 lat (msec): min=40, max=167, avg=84.27, stdev=23.09 00:30:08.095 clat percentiles (msec): 00:30:08.095 | 1.00th=[ 44], 5.00th=[ 50], 10.00th=[ 57], 20.00th=[ 65], 00:30:08.095 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 81], 60.00th=[ 86], 00:30:08.095 | 70.00th=[ 95], 80.00th=[ 103], 90.00th=[ 118], 95.00th=[ 130], 00:30:08.095 | 99.00th=[ 144], 99.50th=[ 153], 99.90th=[ 169], 99.95th=[ 169], 00:30:08.095 | 99.99th=[ 169] 00:30:08.095 bw ( KiB/s): min= 600, max= 896, per=4.55%, avg=756.55, stdev=88.02, samples=20 00:30:08.095 iops : min= 150, max= 224, avg=189.10, stdev=21.98, samples=20 00:30:08.095 lat (msec) : 50=5.73%, 100=72.82%, 250=21.45% 00:30:08.095 cpu : usr=41.03%, sys=1.60%, ctx=1327, majf=0, minf=9 00:30:08.095 IO depths : 1=0.8%, 2=1.7%, 4=7.5%, 8=77.2%, 16=12.8%, 32=0.0%, >=64=0.0% 00:30:08.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.095 complete : 0=0.0%, 4=89.3%, 8=6.2%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.095 issued rwts: total=1902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.095 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:08.095 filename1: (groupid=0, jobs=1): err= 0: pid=117353: Wed Jul 10 14:48:18 2024 00:30:08.095 read: IOPS=194, BW=778KiB/s (796kB/s)(7840KiB/10082msec) 00:30:08.095 slat (usec): min=4, max=8057, avg=32.81, stdev=265.26 00:30:08.095 clat (msec): min=2, max=214, avg=81.95, stdev=32.18 00:30:08.095 lat (msec): min=2, max=214, avg=81.99, stdev=32.18 00:30:08.095 clat percentiles (msec): 00:30:08.095 | 1.00th=[ 4], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 57], 00:30:08.095 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 87], 00:30:08.095 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 122], 95.00th=[ 131], 00:30:08.095 | 99.00th=[ 197], 99.50th=[ 215], 99.90th=[ 215], 99.95th=[ 215], 00:30:08.095 | 99.99th=[ 215] 00:30:08.096 bw ( KiB/s): min= 512, max= 1408, per=4.68%, avg=777.60, stdev=218.13, samples=20 00:30:08.096 iops : min= 128, max= 352, avg=194.40, stdev=54.53, samples=20 00:30:08.096 lat (msec) : 4=1.63%, 10=0.82%, 20=0.82%, 50=11.17%, 100=59.54% 00:30:08.096 lat (msec) : 250=26.02% 00:30:08.096 cpu : usr=39.06%, sys=1.69%, ctx=1218, majf=0, minf=9 00:30:08.096 IO depths : 1=1.6%, 2=3.4%, 4=11.7%, 8=71.8%, 16=11.6%, 32=0.0%, >=64=0.0% 00:30:08.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.096 complete : 0=0.0%, 4=90.2%, 8=4.8%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.096 issued rwts: total=1960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.096 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:08.096 filename1: (groupid=0, jobs=1): err= 0: pid=117354: Wed Jul 10 14:48:18 2024 00:30:08.096 read: IOPS=157, BW=631KiB/s (646kB/s)(6320KiB/10023msec) 00:30:08.096 slat (nsec): min=4584, max=79019, avg=14251.96, stdev=7608.45 00:30:08.096 clat (msec): min=47, max=179, avg=101.38, stdev=28.01 00:30:08.096 lat (msec): min=47, 
max=179, avg=101.39, stdev=28.01 00:30:08.096 clat percentiles (msec): 00:30:08.096 | 1.00th=[ 48], 5.00th=[ 61], 10.00th=[ 72], 20.00th=[ 73], 00:30:08.096 | 30.00th=[ 84], 40.00th=[ 94], 50.00th=[ 99], 60.00th=[ 108], 00:30:08.096 | 70.00th=[ 110], 80.00th=[ 121], 90.00th=[ 136], 95.00th=[ 157], 00:30:08.096 | 99.00th=[ 180], 99.50th=[ 180], 99.90th=[ 180], 99.95th=[ 180], 00:30:08.096 | 99.99th=[ 180] 00:30:08.096 bw ( KiB/s): min= 384, max= 864, per=3.76%, avg=625.70, stdev=102.61, samples=20 00:30:08.096 iops : min= 96, max= 216, avg=156.35, stdev=25.70, samples=20 00:30:08.096 lat (msec) : 50=1.77%, 100=50.13%, 250=48.10% 00:30:08.096 cpu : usr=31.64%, sys=1.21%, ctx=875, majf=0, minf=9 00:30:08.096 IO depths : 1=2.5%, 2=5.5%, 4=14.9%, 8=66.2%, 16=10.9%, 32=0.0%, >=64=0.0% 00:30:08.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.096 complete : 0=0.0%, 4=91.5%, 8=3.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.096 issued rwts: total=1580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.096 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:08.096 filename1: (groupid=0, jobs=1): err= 0: pid=117355: Wed Jul 10 14:48:18 2024 00:30:08.096 read: IOPS=181, BW=725KiB/s (742kB/s)(7276KiB/10035msec) 00:30:08.096 slat (usec): min=8, max=8065, avg=30.56, stdev=250.01 00:30:08.096 clat (msec): min=40, max=171, avg=88.02, stdev=24.49 00:30:08.096 lat (msec): min=40, max=171, avg=88.05, stdev=24.49 00:30:08.096 clat percentiles (msec): 00:30:08.096 | 1.00th=[ 50], 5.00th=[ 56], 10.00th=[ 61], 20.00th=[ 66], 00:30:08.096 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 84], 60.00th=[ 92], 00:30:08.096 | 70.00th=[ 100], 80.00th=[ 108], 90.00th=[ 124], 95.00th=[ 136], 00:30:08.096 | 99.00th=[ 153], 99.50th=[ 171], 99.90th=[ 171], 99.95th=[ 171], 00:30:08.096 | 99.99th=[ 171] 00:30:08.096 bw ( KiB/s): min= 512, max= 896, per=4.34%, avg=721.10, stdev=101.78, samples=20 00:30:08.096 iops : min= 128, max= 224, avg=180.25, stdev=25.44, samples=20 00:30:08.096 lat (msec) : 50=1.81%, 100=69.76%, 250=28.42% 00:30:08.096 cpu : usr=39.23%, sys=1.64%, ctx=1131, majf=0, minf=9 00:30:08.096 IO depths : 1=1.4%, 2=2.8%, 4=10.2%, 8=73.5%, 16=12.1%, 32=0.0%, >=64=0.0% 00:30:08.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.096 complete : 0=0.0%, 4=89.9%, 8=5.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.096 issued rwts: total=1819,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.096 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:08.096 filename2: (groupid=0, jobs=1): err= 0: pid=117356: Wed Jul 10 14:48:18 2024 00:30:08.096 read: IOPS=175, BW=701KiB/s (718kB/s)(7048KiB/10052msec) 00:30:08.096 slat (usec): min=5, max=8055, avg=26.97, stdev=214.49 00:30:08.096 clat (msec): min=32, max=169, avg=91.04, stdev=24.36 00:30:08.096 lat (msec): min=32, max=169, avg=91.06, stdev=24.37 00:30:08.096 clat percentiles (msec): 00:30:08.096 | 1.00th=[ 42], 5.00th=[ 51], 10.00th=[ 61], 20.00th=[ 72], 00:30:08.096 | 30.00th=[ 78], 40.00th=[ 84], 50.00th=[ 91], 60.00th=[ 96], 00:30:08.096 | 70.00th=[ 102], 80.00th=[ 109], 90.00th=[ 121], 95.00th=[ 136], 00:30:08.096 | 99.00th=[ 163], 99.50th=[ 163], 99.90th=[ 169], 99.95th=[ 169], 00:30:08.096 | 99.99th=[ 169] 00:30:08.096 bw ( KiB/s): min= 512, max= 896, per=4.20%, avg=698.15, stdev=101.85, samples=20 00:30:08.096 iops : min= 128, max= 224, avg=174.50, stdev=25.44, samples=20 00:30:08.096 lat (msec) : 50=5.16%, 100=64.64%, 250=30.19% 00:30:08.096 cpu : usr=41.55%, sys=1.61%, ctx=1069, 
majf=0, minf=9 00:30:08.096 IO depths : 1=2.1%, 2=4.8%, 4=14.1%, 8=67.9%, 16=11.1%, 32=0.0%, >=64=0.0% 00:30:08.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.096 complete : 0=0.0%, 4=91.0%, 8=4.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.096 issued rwts: total=1762,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.096 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:08.096 filename2: (groupid=0, jobs=1): err= 0: pid=117357: Wed Jul 10 14:48:18 2024 00:30:08.096 read: IOPS=189, BW=759KiB/s (777kB/s)(7628KiB/10052msec) 00:30:08.096 slat (usec): min=5, max=8060, avg=35.57, stdev=318.65 00:30:08.096 clat (msec): min=35, max=151, avg=84.08, stdev=23.70 00:30:08.096 lat (msec): min=35, max=151, avg=84.11, stdev=23.70 00:30:08.096 clat percentiles (msec): 00:30:08.096 | 1.00th=[ 44], 5.00th=[ 53], 10.00th=[ 60], 20.00th=[ 65], 00:30:08.096 | 30.00th=[ 69], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 85], 00:30:08.096 | 70.00th=[ 96], 80.00th=[ 104], 90.00th=[ 117], 95.00th=[ 136], 00:30:08.096 | 99.00th=[ 150], 99.50th=[ 150], 99.90th=[ 150], 99.95th=[ 153], 00:30:08.096 | 99.99th=[ 153] 00:30:08.096 bw ( KiB/s): min= 512, max= 944, per=4.55%, avg=756.05, stdev=116.32, samples=20 00:30:08.096 iops : min= 128, max= 236, avg=189.00, stdev=29.08, samples=20 00:30:08.096 lat (msec) : 50=3.67%, 100=74.62%, 250=21.71% 00:30:08.096 cpu : usr=38.52%, sys=1.59%, ctx=1153, majf=0, minf=9 00:30:08.096 IO depths : 1=1.0%, 2=2.0%, 4=8.5%, 8=76.1%, 16=12.4%, 32=0.0%, >=64=0.0% 00:30:08.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.096 complete : 0=0.0%, 4=89.7%, 8=5.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.096 issued rwts: total=1907,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.096 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:08.096 filename2: (groupid=0, jobs=1): err= 0: pid=117358: Wed Jul 10 14:48:18 2024 00:30:08.096 read: IOPS=154, BW=619KiB/s (633kB/s)(6208KiB/10037msec) 00:30:08.096 slat (usec): min=3, max=8051, avg=39.93, stdev=291.40 00:30:08.096 clat (msec): min=42, max=188, avg=103.10, stdev=23.82 00:30:08.096 lat (msec): min=43, max=188, avg=103.14, stdev=23.82 00:30:08.096 clat percentiles (msec): 00:30:08.096 | 1.00th=[ 50], 5.00th=[ 69], 10.00th=[ 72], 20.00th=[ 83], 00:30:08.096 | 30.00th=[ 92], 40.00th=[ 99], 50.00th=[ 103], 60.00th=[ 107], 00:30:08.096 | 70.00th=[ 112], 80.00th=[ 124], 90.00th=[ 133], 95.00th=[ 146], 00:30:08.096 | 99.00th=[ 157], 99.50th=[ 180], 99.90th=[ 190], 99.95th=[ 190], 00:30:08.096 | 99.99th=[ 190] 00:30:08.096 bw ( KiB/s): min= 480, max= 769, per=3.69%, avg=614.00, stdev=91.84, samples=20 00:30:08.096 iops : min= 120, max= 192, avg=153.45, stdev=22.94, samples=20 00:30:08.096 lat (msec) : 50=1.35%, 100=42.78%, 250=55.86% 00:30:08.096 cpu : usr=38.00%, sys=1.71%, ctx=1285, majf=0, minf=9 00:30:08.096 IO depths : 1=3.7%, 2=8.1%, 4=19.7%, 8=59.5%, 16=9.0%, 32=0.0%, >=64=0.0% 00:30:08.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.096 complete : 0=0.0%, 4=92.4%, 8=2.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.096 issued rwts: total=1552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.096 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:08.096 filename2: (groupid=0, jobs=1): err= 0: pid=117359: Wed Jul 10 14:48:18 2024 00:30:08.096 read: IOPS=182, BW=729KiB/s (746kB/s)(7312KiB/10036msec) 00:30:08.096 slat (usec): min=3, max=8057, avg=40.90, stdev=375.27 00:30:08.096 clat (msec): min=13, max=203, 
avg=87.54, stdev=28.76 00:30:08.096 lat (msec): min=13, max=203, avg=87.58, stdev=28.77 00:30:08.096 clat percentiles (msec): 00:30:08.096 | 1.00th=[ 24], 5.00th=[ 47], 10.00th=[ 53], 20.00th=[ 61], 00:30:08.096 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 86], 60.00th=[ 94], 00:30:08.096 | 70.00th=[ 103], 80.00th=[ 115], 90.00th=[ 126], 95.00th=[ 133], 00:30:08.096 | 99.00th=[ 153], 99.50th=[ 165], 99.90th=[ 203], 99.95th=[ 203], 00:30:08.096 | 99.99th=[ 203] 00:30:08.096 bw ( KiB/s): min= 512, max= 1192, per=4.36%, avg=724.40, stdev=184.60, samples=20 00:30:08.096 iops : min= 128, max= 298, avg=181.10, stdev=46.15, samples=20 00:30:08.096 lat (msec) : 20=0.88%, 50=6.78%, 100=60.18%, 250=32.17% 00:30:08.096 cpu : usr=39.25%, sys=1.52%, ctx=1357, majf=0, minf=9 00:30:08.096 IO depths : 1=2.8%, 2=6.1%, 4=15.9%, 8=65.0%, 16=10.1%, 32=0.0%, >=64=0.0% 00:30:08.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.096 complete : 0=0.0%, 4=91.7%, 8=3.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.096 issued rwts: total=1828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.096 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:08.096 filename2: (groupid=0, jobs=1): err= 0: pid=117360: Wed Jul 10 14:48:18 2024 00:30:08.096 read: IOPS=159, BW=637KiB/s (652kB/s)(6400KiB/10044msec) 00:30:08.096 slat (usec): min=4, max=8052, avg=54.09, stdev=436.64 00:30:08.096 clat (msec): min=47, max=191, avg=100.02, stdev=25.40 00:30:08.096 lat (msec): min=47, max=191, avg=100.08, stdev=25.40 00:30:08.096 clat percentiles (msec): 00:30:08.096 | 1.00th=[ 51], 5.00th=[ 64], 10.00th=[ 72], 20.00th=[ 75], 00:30:08.096 | 30.00th=[ 84], 40.00th=[ 92], 50.00th=[ 96], 60.00th=[ 106], 00:30:08.096 | 70.00th=[ 110], 80.00th=[ 121], 90.00th=[ 134], 95.00th=[ 146], 00:30:08.096 | 99.00th=[ 157], 99.50th=[ 180], 99.90th=[ 192], 99.95th=[ 192], 00:30:08.096 | 99.99th=[ 192] 00:30:08.096 bw ( KiB/s): min= 464, max= 808, per=3.81%, avg=633.00, stdev=113.55, samples=20 00:30:08.096 iops : min= 116, max= 202, avg=158.20, stdev=28.39, samples=20 00:30:08.096 lat (msec) : 50=0.75%, 100=54.94%, 250=44.31% 00:30:08.096 cpu : usr=37.44%, sys=1.53%, ctx=1044, majf=0, minf=9 00:30:08.096 IO depths : 1=3.0%, 2=6.8%, 4=17.9%, 8=62.6%, 16=9.7%, 32=0.0%, >=64=0.0% 00:30:08.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.096 complete : 0=0.0%, 4=92.1%, 8=2.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.096 issued rwts: total=1600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.096 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:08.096 filename2: (groupid=0, jobs=1): err= 0: pid=117361: Wed Jul 10 14:48:18 2024 00:30:08.096 read: IOPS=156, BW=627KiB/s (642kB/s)(6288KiB/10034msec) 00:30:08.096 slat (usec): min=4, max=11070, avg=55.38, stdev=441.03 00:30:08.096 clat (msec): min=44, max=191, avg=101.67, stdev=25.80 00:30:08.096 lat (msec): min=44, max=191, avg=101.72, stdev=25.82 00:30:08.096 clat percentiles (msec): 00:30:08.096 | 1.00th=[ 54], 5.00th=[ 67], 10.00th=[ 71], 20.00th=[ 79], 00:30:08.096 | 30.00th=[ 86], 40.00th=[ 94], 50.00th=[ 100], 60.00th=[ 106], 00:30:08.096 | 70.00th=[ 114], 80.00th=[ 123], 90.00th=[ 136], 95.00th=[ 150], 00:30:08.096 | 99.00th=[ 171], 99.50th=[ 174], 99.90th=[ 192], 99.95th=[ 192], 00:30:08.096 | 99.99th=[ 192] 00:30:08.096 bw ( KiB/s): min= 509, max= 816, per=3.74%, avg=622.20, stdev=77.61, samples=20 00:30:08.097 iops : min= 127, max= 204, avg=155.50, stdev=19.39, samples=20 00:30:08.097 lat (msec) : 50=0.38%, 
100=51.34%, 250=48.28% 00:30:08.097 cpu : usr=38.06%, sys=1.64%, ctx=1309, majf=0, minf=9 00:30:08.097 IO depths : 1=3.0%, 2=7.0%, 4=18.4%, 8=61.9%, 16=9.7%, 32=0.0%, >=64=0.0% 00:30:08.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.097 complete : 0=0.0%, 4=92.3%, 8=2.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.097 issued rwts: total=1572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.097 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:08.097 filename2: (groupid=0, jobs=1): err= 0: pid=117362: Wed Jul 10 14:48:18 2024 00:30:08.097 read: IOPS=180, BW=724KiB/s (741kB/s)(7264KiB/10037msec) 00:30:08.097 slat (usec): min=8, max=11050, avg=53.65, stdev=428.77 00:30:08.097 clat (msec): min=35, max=187, avg=88.18, stdev=23.67 00:30:08.097 lat (msec): min=35, max=187, avg=88.23, stdev=23.68 00:30:08.097 clat percentiles (msec): 00:30:08.097 | 1.00th=[ 47], 5.00th=[ 55], 10.00th=[ 60], 20.00th=[ 67], 00:30:08.097 | 30.00th=[ 75], 40.00th=[ 80], 50.00th=[ 86], 60.00th=[ 92], 00:30:08.097 | 70.00th=[ 101], 80.00th=[ 109], 90.00th=[ 122], 95.00th=[ 131], 00:30:08.097 | 99.00th=[ 150], 99.50th=[ 150], 99.90th=[ 188], 99.95th=[ 188], 00:30:08.097 | 99.99th=[ 188] 00:30:08.097 bw ( KiB/s): min= 512, max= 936, per=4.33%, avg=719.40, stdev=120.23, samples=20 00:30:08.097 iops : min= 128, max= 234, avg=179.85, stdev=30.06, samples=20 00:30:08.097 lat (msec) : 50=2.42%, 100=68.01%, 250=29.57% 00:30:08.097 cpu : usr=38.64%, sys=1.68%, ctx=1302, majf=0, minf=9 00:30:08.097 IO depths : 1=1.3%, 2=2.8%, 4=10.6%, 8=73.1%, 16=12.1%, 32=0.0%, >=64=0.0% 00:30:08.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.097 complete : 0=0.0%, 4=90.2%, 8=5.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.097 issued rwts: total=1816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.097 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:08.097 filename2: (groupid=0, jobs=1): err= 0: pid=117363: Wed Jul 10 14:48:18 2024 00:30:08.097 read: IOPS=156, BW=624KiB/s (639kB/s)(6260KiB/10028msec) 00:30:08.097 slat (usec): min=4, max=8049, avg=19.23, stdev=203.24 00:30:08.097 clat (msec): min=46, max=185, avg=102.34, stdev=28.30 00:30:08.097 lat (msec): min=46, max=185, avg=102.35, stdev=28.30 00:30:08.097 clat percentiles (msec): 00:30:08.097 | 1.00th=[ 49], 5.00th=[ 60], 10.00th=[ 69], 20.00th=[ 82], 00:30:08.097 | 30.00th=[ 85], 40.00th=[ 95], 50.00th=[ 100], 60.00th=[ 108], 00:30:08.097 | 70.00th=[ 117], 80.00th=[ 130], 90.00th=[ 144], 95.00th=[ 157], 00:30:08.097 | 99.00th=[ 186], 99.50th=[ 186], 99.90th=[ 186], 99.95th=[ 186], 00:30:08.097 | 99.99th=[ 186] 00:30:08.097 bw ( KiB/s): min= 472, max= 896, per=3.72%, avg=619.35, stdev=115.71, samples=20 00:30:08.097 iops : min= 118, max= 224, avg=154.80, stdev=28.94, samples=20 00:30:08.097 lat (msec) : 50=2.11%, 100=49.65%, 250=48.24% 00:30:08.097 cpu : usr=31.71%, sys=1.12%, ctx=875, majf=0, minf=9 00:30:08.097 IO depths : 1=2.8%, 2=6.1%, 4=16.0%, 8=65.0%, 16=10.1%, 32=0.0%, >=64=0.0% 00:30:08.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.097 complete : 0=0.0%, 4=91.6%, 8=3.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.097 issued rwts: total=1565,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.097 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:08.097 00:30:08.097 Run status group 0 (all jobs): 00:30:08.097 READ: bw=16.2MiB/s (17.0MB/s), 617KiB/s-815KiB/s (632kB/s-835kB/s), io=164MiB (172MB), run=10023-10082msec 00:30:08.097 
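Note on the READ summary line above: it aggregates all per-file jobs in this group, and each job's "per=" column is its share of the 16.2MiB/s total, so the per-job "avg=" bandwidths should add back up to the aggregate. A minimal cross-check sketch, assuming the full fio output above has been captured to a file named fio.log (the file name and capture step are assumptions, not part of the test):

# Cross-check sketch: sum the per-job average bandwidths and compare with the aggregate READ line.
grep 'bw (' fio.log \
  | grep -o 'avg=[0-9.]*' \
  | awk -F= '{s += $2} END { printf "sum of per-job avg bw: %.0f KiB/s (~%.1f MiB/s)\n", s, s/1024 }'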
14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
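The destroy_subsystems helper above tears each target down in two steps: delete the NVMe-oF subsystem, then delete its backing null bdev. A rough standalone equivalent using scripts/rpc.py is sketched below; the test itself goes through the rpc_cmd wrapper shown in the trace, and the rpc.py path and default RPC socket are assumptions:

# Sketch only: direct RPC equivalent of "destroy_subsystems 0 1 2".
for i in 0 1 2; do
    scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    scripts/rpc.py bdev_null_delete "bdev_null${i}"
done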
00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:08.097 bdev_null0 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:08.097 [2024-07-10 14:48:19.168254] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.097 14:48:19 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:08.097 bdev_null1 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:08.097 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:08.098 { 00:30:08.098 "params": { 00:30:08.098 "name": "Nvme$subsystem", 00:30:08.098 "trtype": "$TEST_TRANSPORT", 00:30:08.098 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:08.098 "adrfam": "ipv4", 00:30:08.098 "trsvcid": "$NVMF_PORT", 00:30:08.098 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:08.098 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:08.098 "hdgst": ${hdgst:-false}, 00:30:08.098 "ddgst": ${ddgst:-false} 00:30:08.098 }, 00:30:08.098 "method": "bdev_nvme_attach_controller" 00:30:08.098 } 00:30:08.098 EOF 00:30:08.098 )") 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:08.098 14:48:19 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:08.098 { 00:30:08.098 "params": { 00:30:08.098 "name": "Nvme$subsystem", 00:30:08.098 "trtype": "$TEST_TRANSPORT", 00:30:08.098 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:08.098 "adrfam": "ipv4", 00:30:08.098 "trsvcid": "$NVMF_PORT", 00:30:08.098 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:08.098 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:08.098 "hdgst": ${hdgst:-false}, 00:30:08.098 "ddgst": ${ddgst:-false} 00:30:08.098 }, 00:30:08.098 "method": "bdev_nvme_attach_controller" 00:30:08.098 } 00:30:08.098 EOF 00:30:08.098 )") 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:08.098 "params": { 00:30:08.098 "name": "Nvme0", 00:30:08.098 "trtype": "tcp", 00:30:08.098 "traddr": "10.0.0.2", 00:30:08.098 "adrfam": "ipv4", 00:30:08.098 "trsvcid": "4420", 00:30:08.098 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:08.098 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:08.098 "hdgst": false, 00:30:08.098 "ddgst": false 00:30:08.098 }, 00:30:08.098 "method": "bdev_nvme_attach_controller" 00:30:08.098 },{ 00:30:08.098 "params": { 00:30:08.098 "name": "Nvme1", 00:30:08.098 "trtype": "tcp", 00:30:08.098 "traddr": "10.0.0.2", 00:30:08.098 "adrfam": "ipv4", 00:30:08.098 "trsvcid": "4420", 00:30:08.098 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:08.098 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:08.098 "hdgst": false, 00:30:08.098 "ddgst": false 00:30:08.098 }, 00:30:08.098 "method": "bdev_nvme_attach_controller" 00:30:08.098 }' 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:08.098 14:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:08.098 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:08.098 ... 00:30:08.098 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:08.098 ... 
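The printf block above is the per-controller configuration that gen_nvmf_target_json feeds to the fio spdk_bdev engine through --spdk_json_conf /dev/fd/62: one bdev_nvme_attach_controller entry per subsystem, with digests disabled for this run. Written out to a regular file, the configuration would look roughly like the sketch below; only the params block is taken from the generated output, while the outer "subsystems"/"bdev" framing, the file path, and the <jobfile> placeholder are assumptions:

# Hypothetical standalone config for the same fio invocation
# (only the Nvme0 entry is shown; the Nvme1/cnode1 entry follows the same pattern).
cat > /tmp/nvmf_fio.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvmf_fio.json <jobfile>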
00:30:08.098 fio-3.35 00:30:08.098 Starting 4 threads 00:30:13.368 00:30:13.368 filename0: (groupid=0, jobs=1): err= 0: pid=117485: Wed Jul 10 14:48:25 2024 00:30:13.368 read: IOPS=1542, BW=12.1MiB/s (12.6MB/s)(60.3MiB/5001msec) 00:30:13.368 slat (nsec): min=6446, max=62654, avg=15514.09, stdev=8189.94 00:30:13.368 clat (usec): min=1205, max=9288, avg=5113.94, stdev=764.50 00:30:13.368 lat (usec): min=1213, max=9312, avg=5129.45, stdev=768.34 00:30:13.368 clat percentiles (usec): 00:30:13.368 | 1.00th=[ 4015], 5.00th=[ 4080], 10.00th=[ 4113], 20.00th=[ 4178], 00:30:13.368 | 30.00th=[ 4293], 40.00th=[ 5407], 50.00th=[ 5538], 60.00th=[ 5604], 00:30:13.368 | 70.00th=[ 5669], 80.00th=[ 5735], 90.00th=[ 5800], 95.00th=[ 5932], 00:30:13.368 | 99.00th=[ 6194], 99.50th=[ 6259], 99.90th=[ 6980], 99.95th=[ 9241], 00:30:13.368 | 99.99th=[ 9241] 00:30:13.368 bw ( KiB/s): min=11008, max=15360, per=24.49%, avg=12071.11, stdev=1508.91, samples=9 00:30:13.368 iops : min= 1376, max= 1920, avg=1508.89, stdev=188.61, samples=9 00:30:13.368 lat (msec) : 2=0.18%, 4=0.57%, 10=99.25% 00:30:13.368 cpu : usr=93.04%, sys=5.42%, ctx=7, majf=0, minf=9 00:30:13.368 IO depths : 1=6.9%, 2=18.9%, 4=56.1%, 8=18.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:13.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.368 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.368 issued rwts: total=7716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.368 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:13.368 filename0: (groupid=0, jobs=1): err= 0: pid=117486: Wed Jul 10 14:48:25 2024 00:30:13.368 read: IOPS=1539, BW=12.0MiB/s (12.6MB/s)(60.1MiB/5001msec) 00:30:13.368 slat (nsec): min=5362, max=61522, avg=15757.83, stdev=5744.17 00:30:13.368 clat (usec): min=1411, max=9437, avg=5121.49, stdev=812.51 00:30:13.368 lat (usec): min=1422, max=9451, avg=5137.25, stdev=812.07 00:30:13.368 clat percentiles (usec): 00:30:13.368 | 1.00th=[ 3982], 5.00th=[ 4047], 10.00th=[ 4080], 20.00th=[ 4146], 00:30:13.368 | 30.00th=[ 4228], 40.00th=[ 5211], 50.00th=[ 5604], 60.00th=[ 5669], 00:30:13.368 | 70.00th=[ 5669], 80.00th=[ 5735], 90.00th=[ 5800], 95.00th=[ 5997], 00:30:13.368 | 99.00th=[ 6456], 99.50th=[ 7570], 99.90th=[ 9110], 99.95th=[ 9241], 00:30:13.368 | 99.99th=[ 9503] 00:30:13.368 bw ( KiB/s): min=11008, max=15360, per=24.45%, avg=12051.56, stdev=1477.33, samples=9 00:30:13.368 iops : min= 1376, max= 1920, avg=1506.44, stdev=184.67, samples=9 00:30:13.368 lat (msec) : 2=0.04%, 4=1.19%, 10=98.77% 00:30:13.368 cpu : usr=93.14%, sys=5.32%, ctx=31, majf=0, minf=10 00:30:13.368 IO depths : 1=6.5%, 2=22.2%, 4=52.8%, 8=18.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:13.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.368 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.368 issued rwts: total=7699,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.368 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:13.368 filename1: (groupid=0, jobs=1): err= 0: pid=117487: Wed Jul 10 14:48:25 2024 00:30:13.368 read: IOPS=1538, BW=12.0MiB/s (12.6MB/s)(60.1MiB/5001msec) 00:30:13.368 slat (usec): min=4, max=862, avg=18.84, stdev=11.41 00:30:13.368 clat (usec): min=2169, max=9469, avg=5104.48, stdev=774.21 00:30:13.368 lat (usec): min=2200, max=9484, avg=5123.32, stdev=775.03 00:30:13.368 clat percentiles (usec): 00:30:13.368 | 1.00th=[ 3982], 5.00th=[ 4047], 10.00th=[ 4080], 20.00th=[ 4146], 00:30:13.368 | 30.00th=[ 4228], 
40.00th=[ 5407], 50.00th=[ 5538], 60.00th=[ 5604], 00:30:13.368 | 70.00th=[ 5669], 80.00th=[ 5735], 90.00th=[ 5800], 95.00th=[ 5932], 00:30:13.369 | 99.00th=[ 6194], 99.50th=[ 6390], 99.90th=[ 8717], 99.95th=[ 8848], 00:30:13.369 | 99.99th=[ 9503] 00:30:13.369 bw ( KiB/s): min=11008, max=15360, per=24.44%, avg=12046.22, stdev=1480.94, samples=9 00:30:13.369 iops : min= 1376, max= 1920, avg=1505.78, stdev=185.12, samples=9 00:30:13.369 lat (msec) : 4=1.10%, 10=98.90% 00:30:13.369 cpu : usr=93.00%, sys=5.40%, ctx=12, majf=0, minf=9 00:30:13.369 IO depths : 1=10.0%, 2=25.0%, 4=50.0%, 8=15.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:13.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.369 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.369 issued rwts: total=7696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.369 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:13.369 filename1: (groupid=0, jobs=1): err= 0: pid=117488: Wed Jul 10 14:48:25 2024 00:30:13.369 read: IOPS=1542, BW=12.0MiB/s (12.6MB/s)(60.3MiB/5004msec) 00:30:13.369 slat (nsec): min=4647, max=85010, avg=17461.47, stdev=7155.92 00:30:13.369 clat (usec): min=1398, max=7056, avg=5097.75, stdev=756.51 00:30:13.369 lat (usec): min=1409, max=7073, avg=5115.22, stdev=757.56 00:30:13.369 clat percentiles (usec): 00:30:13.369 | 1.00th=[ 4015], 5.00th=[ 4080], 10.00th=[ 4113], 20.00th=[ 4146], 00:30:13.369 | 30.00th=[ 4228], 40.00th=[ 5342], 50.00th=[ 5538], 60.00th=[ 5604], 00:30:13.369 | 70.00th=[ 5669], 80.00th=[ 5735], 90.00th=[ 5800], 95.00th=[ 5932], 00:30:13.369 | 99.00th=[ 6128], 99.50th=[ 6259], 99.90th=[ 6587], 99.95th=[ 6849], 00:30:13.369 | 99.99th=[ 7046] 00:30:13.369 bw ( KiB/s): min=11008, max=15360, per=25.03%, avg=12336.00, stdev=1622.82, samples=10 00:30:13.369 iops : min= 1376, max= 1920, avg=1542.00, stdev=202.85, samples=10 00:30:13.369 lat (msec) : 2=0.10%, 4=0.61%, 10=99.29% 00:30:13.369 cpu : usr=91.59%, sys=6.42%, ctx=9, majf=0, minf=9 00:30:13.369 IO depths : 1=10.4%, 2=24.5%, 4=50.4%, 8=14.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:13.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.369 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.369 issued rwts: total=7718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.369 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:13.369 00:30:13.369 Run status group 0 (all jobs): 00:30:13.369 READ: bw=48.1MiB/s (50.5MB/s), 12.0MiB/s-12.1MiB/s (12.6MB/s-12.6MB/s), io=241MiB (253MB), run=5001-5004msec 00:30:13.369 14:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:13.369 14:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:13.369 14:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:13.369 14:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:13.369 14:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:13.369 14:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:13.369 14:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.369 14:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:13.369 14:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.369 14:48:25 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:13.369 14:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.369 14:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:13.369 14:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.369 14:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:13.369 14:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:13.369 14:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:13.369 14:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:13.369 14:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.369 14:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:13.369 14:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.369 14:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:13.369 14:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.369 14:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:13.369 ************************************ 00:30:13.369 END TEST fio_dif_rand_params 00:30:13.369 ************************************ 00:30:13.369 14:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.369 00:30:13.369 real 0m23.307s 00:30:13.369 user 2m2.648s 00:30:13.369 sys 0m6.697s 00:30:13.369 14:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:13.369 14:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:13.369 14:48:25 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:13.369 14:48:25 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:13.369 14:48:25 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:13.369 14:48:25 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:13.369 14:48:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:13.369 ************************************ 00:30:13.369 START TEST fio_dif_digest 00:30:13.369 ************************************ 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 
00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:13.369 bdev_null0 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:13.369 [2024-07-10 14:48:25.323773] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:13.369 { 00:30:13.369 "params": { 00:30:13.369 "name": "Nvme$subsystem", 00:30:13.369 "trtype": 
"$TEST_TRANSPORT", 00:30:13.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:13.369 "adrfam": "ipv4", 00:30:13.369 "trsvcid": "$NVMF_PORT", 00:30:13.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:13.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:13.369 "hdgst": ${hdgst:-false}, 00:30:13.369 "ddgst": ${ddgst:-false} 00:30:13.369 }, 00:30:13.369 "method": "bdev_nvme_attach_controller" 00:30:13.369 } 00:30:13.369 EOF 00:30:13.369 )") 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:30:13.369 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:13.370 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:30:13.370 14:48:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:30:13.370 14:48:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:30:13.370 14:48:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:13.370 "params": { 00:30:13.370 "name": "Nvme0", 00:30:13.370 "trtype": "tcp", 00:30:13.370 "traddr": "10.0.0.2", 00:30:13.370 "adrfam": "ipv4", 00:30:13.370 "trsvcid": "4420", 00:30:13.370 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:13.370 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:13.370 "hdgst": true, 00:30:13.370 "ddgst": true 00:30:13.370 }, 00:30:13.370 "method": "bdev_nvme_attach_controller" 00:30:13.370 }' 00:30:13.370 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:13.370 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:13.370 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:13.370 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:13.370 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:13.370 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:13.370 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:13.370 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:13.370 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:13.370 14:48:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:13.370 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:13.370 ... 
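For the digest test the target side is built the same way as the earlier runs except that the null bdev uses DIF type 3, while the initiator side attaches with header and data digests enabled ("hdgst": true / "ddgst": true in the generated config above). The target-side RPCs, taken from the rpc_cmd calls in the trace and sketched here as direct scripts/rpc.py invocations (the rpc.py path and socket are assumptions):

# Sketch of the target-side setup for the digest run (commands as issued via rpc_cmd above).
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420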
00:30:13.370 fio-3.35 00:30:13.370 Starting 3 threads 00:30:25.580 00:30:25.580 filename0: (groupid=0, jobs=1): err= 0: pid=117589: Wed Jul 10 14:48:35 2024 00:30:25.580 read: IOPS=164, BW=20.6MiB/s (21.6MB/s)(206MiB/10003msec) 00:30:25.580 slat (nsec): min=4734, max=49691, avg=13892.85, stdev=3795.73 00:30:25.580 clat (usec): min=6472, max=22086, avg=18221.61, stdev=962.83 00:30:25.580 lat (usec): min=6485, max=22094, avg=18235.51, stdev=962.93 00:30:25.580 clat percentiles (usec): 00:30:25.580 | 1.00th=[16057], 5.00th=[16909], 10.00th=[17171], 20.00th=[17433], 00:30:25.580 | 30.00th=[17695], 40.00th=[17957], 50.00th=[18220], 60.00th=[18482], 00:30:25.580 | 70.00th=[18744], 80.00th=[19006], 90.00th=[19268], 95.00th=[19792], 00:30:25.580 | 99.00th=[20841], 99.50th=[21365], 99.90th=[21627], 99.95th=[22152], 00:30:25.580 | 99.99th=[22152] 00:30:25.580 bw ( KiB/s): min=19968, max=22272, per=27.03%, avg=21032.42, stdev=469.44, samples=19 00:30:25.580 iops : min= 156, max= 174, avg=164.32, stdev= 3.67, samples=19 00:30:25.580 lat (msec) : 10=0.06%, 20=96.23%, 50=3.71% 00:30:25.580 cpu : usr=93.45%, sys=5.32%, ctx=7, majf=0, minf=0 00:30:25.580 IO depths : 1=6.4%, 2=93.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:25.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:25.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:25.580 issued rwts: total=1645,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:25.580 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:25.580 filename0: (groupid=0, jobs=1): err= 0: pid=117590: Wed Jul 10 14:48:35 2024 00:30:25.580 read: IOPS=209, BW=26.2MiB/s (27.5MB/s)(262MiB/10002msec) 00:30:25.580 slat (nsec): min=7890, max=58856, avg=13179.13, stdev=3770.52 00:30:25.580 clat (usec): min=10376, max=19006, avg=14281.86, stdev=1041.90 00:30:25.580 lat (usec): min=10385, max=19020, avg=14295.04, stdev=1041.58 00:30:25.580 clat percentiles (usec): 00:30:25.580 | 1.00th=[11863], 5.00th=[12518], 10.00th=[12911], 20.00th=[13435], 00:30:25.580 | 30.00th=[13698], 40.00th=[14091], 50.00th=[14353], 60.00th=[14484], 00:30:25.580 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15533], 95.00th=[16057], 00:30:25.580 | 99.00th=[16909], 99.50th=[17171], 99.90th=[18744], 99.95th=[19006], 00:30:25.580 | 99.99th=[19006] 00:30:25.580 bw ( KiB/s): min=24832, max=27904, per=34.62%, avg=26933.89, stdev=726.98, samples=19 00:30:25.580 iops : min= 194, max= 218, avg=210.42, stdev= 5.68, samples=19 00:30:25.580 lat (msec) : 20=100.00% 00:30:25.580 cpu : usr=92.44%, sys=6.22%, ctx=13, majf=0, minf=0 00:30:25.580 IO depths : 1=6.0%, 2=94.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:25.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:25.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:25.580 issued rwts: total=2098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:25.580 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:25.580 filename0: (groupid=0, jobs=1): err= 0: pid=117591: Wed Jul 10 14:48:35 2024 00:30:25.580 read: IOPS=233, BW=29.2MiB/s (30.6MB/s)(292MiB/10005msec) 00:30:25.580 slat (nsec): min=6948, max=62827, avg=14265.42, stdev=3635.06 00:30:25.580 clat (usec): min=9923, max=19408, avg=12819.67, stdev=866.35 00:30:25.580 lat (usec): min=9940, max=19426, avg=12833.94, stdev=866.72 00:30:25.580 clat percentiles (usec): 00:30:25.580 | 1.00th=[10814], 5.00th=[11469], 10.00th=[11731], 20.00th=[12256], 00:30:25.580 | 30.00th=[12387], 
40.00th=[12649], 50.00th=[12780], 60.00th=[13042], 00:30:25.580 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13829], 95.00th=[14222], 00:30:25.580 | 99.00th=[15270], 99.50th=[15926], 99.90th=[18744], 99.95th=[19268], 00:30:25.580 | 99.99th=[19530] 00:30:25.580 bw ( KiB/s): min=27648, max=30976, per=38.51%, avg=29959.00, stdev=832.46, samples=19 00:30:25.580 iops : min= 216, max= 242, avg=234.05, stdev= 6.50, samples=19 00:30:25.580 lat (msec) : 10=0.04%, 20=99.96% 00:30:25.580 cpu : usr=92.41%, sys=6.10%, ctx=15, majf=0, minf=0 00:30:25.580 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:25.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:25.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:25.580 issued rwts: total=2338,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:25.580 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:25.580 00:30:25.580 Run status group 0 (all jobs): 00:30:25.580 READ: bw=76.0MiB/s (79.7MB/s), 20.6MiB/s-29.2MiB/s (21.6MB/s-30.6MB/s), io=760MiB (797MB), run=10002-10005msec 00:30:25.580 14:48:36 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:30:25.580 14:48:36 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:30:25.580 14:48:36 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:30:25.580 14:48:36 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:25.580 14:48:36 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:30:25.580 14:48:36 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:25.580 14:48:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.580 14:48:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:25.580 14:48:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.580 14:48:36 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:25.580 14:48:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.580 14:48:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:25.580 ************************************ 00:30:25.580 END TEST fio_dif_digest 00:30:25.580 ************************************ 00:30:25.580 14:48:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.580 00:30:25.580 real 0m10.842s 00:30:25.580 user 0m28.390s 00:30:25.580 sys 0m1.978s 00:30:25.580 14:48:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:25.580 14:48:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:25.580 14:48:36 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:25.580 14:48:36 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:30:25.580 14:48:36 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:30:25.580 14:48:36 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:25.580 14:48:36 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:30:25.580 14:48:36 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:25.580 14:48:36 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:30:25.580 14:48:36 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:25.580 14:48:36 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:25.580 rmmod nvme_tcp 00:30:25.580 rmmod nvme_fabrics 00:30:25.580 rmmod nvme_keyring 00:30:25.580 
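A quick cross-check on the fio digest run summarized above: each job's bandwidth is roughly its average IOPS times the 128 KiB block size, and the three jobs sum to the group READ figure. The numbers below are copied from the log; the sketch only redoes the arithmetic.

# Per-job bandwidth, approx avg IOPS x 128 KiB:
echo '164.32*128/1024' | bc -l   # approx 20.5 MiB/s (reported 20.6 MiB/s)
echo '210.42*128/1024' | bc -l   # approx 26.3 MiB/s (reported 26.2 MiB/s)
echo '234.05*128/1024' | bc -l   # approx 29.3 MiB/s (reported 29.2 MiB/s)
# Group total, matching the "Run status group 0" READ line:
echo '20.6+26.2+29.2' | bc       # 76.0 MiB/s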
14:48:36 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:25.580 14:48:36 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:30:25.580 14:48:36 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:30:25.580 14:48:36 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 116882 ']' 00:30:25.580 14:48:36 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 116882 00:30:25.580 14:48:36 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 116882 ']' 00:30:25.580 14:48:36 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 116882 00:30:25.580 14:48:36 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:30:25.580 14:48:36 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:25.580 14:48:36 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116882 00:30:25.580 killing process with pid 116882 00:30:25.580 14:48:36 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:25.580 14:48:36 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:25.581 14:48:36 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116882' 00:30:25.581 14:48:36 nvmf_dif -- common/autotest_common.sh@967 -- # kill 116882 00:30:25.581 14:48:36 nvmf_dif -- common/autotest_common.sh@972 -- # wait 116882 00:30:25.581 14:48:36 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:25.581 14:48:36 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:25.581 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:25.581 Waiting for block devices as requested 00:30:25.581 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:25.581 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:25.581 14:48:36 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:25.581 14:48:36 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:25.581 14:48:36 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:25.581 14:48:36 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:25.581 14:48:36 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.581 14:48:36 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:25.581 14:48:36 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.581 14:48:37 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:25.581 00:30:25.581 real 0m58.371s 00:30:25.581 user 3m45.144s 00:30:25.581 sys 0m17.011s 00:30:25.581 ************************************ 00:30:25.581 END TEST nvmf_dif 00:30:25.581 ************************************ 00:30:25.581 14:48:37 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:25.581 14:48:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:25.581 14:48:37 -- common/autotest_common.sh@1142 -- # return 0 00:30:25.581 14:48:37 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:25.581 14:48:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:25.581 14:48:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:25.581 14:48:37 -- common/autotest_common.sh@10 -- # set +x 00:30:25.581 ************************************ 00:30:25.581 START TEST nvmf_abort_qd_sizes 00:30:25.581 ************************************ 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:25.581 * Looking for test storage... 00:30:25.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:25.581 14:48:37 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:25.581 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:25.581 Cannot find device "nvmf_tgt_br" 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:25.582 Cannot find device "nvmf_tgt_br2" 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:25.582 Cannot find device "nvmf_tgt_br" 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:25.582 Cannot find device "nvmf_tgt_br2" 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:25.582 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:25.582 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:25.582 14:48:37 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:25.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:25.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:30:25.582 00:30:25.582 --- 10.0.0.2 ping statistics --- 00:30:25.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:25.582 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:25.582 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:25.582 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:30:25.582 00:30:25.582 --- 10.0.0.3 ping statistics --- 00:30:25.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:25.582 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:25.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:25.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:30:25.582 00:30:25.582 --- 10.0.0.1 ping statistics --- 00:30:25.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:25.582 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:25.582 14:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:25.841 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:26.099 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:26.099 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:26.099 14:48:38 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:26.099 14:48:38 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:26.099 14:48:38 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:26.099 14:48:38 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:26.099 14:48:38 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:26.099 14:48:38 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:26.099 14:48:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:30:26.099 14:48:38 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:26.099 14:48:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:26.099 14:48:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:26.099 14:48:38 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=118171 00:30:26.099 14:48:38 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 118171 00:30:26.099 14:48:38 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:30:26.099 14:48:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 118171 ']' 00:30:26.099 14:48:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:26.099 14:48:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:26.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:26.099 14:48:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:26.099 14:48:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:26.099 14:48:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:26.358 [2024-07-10 14:48:38.426037] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:30:26.358 [2024-07-10 14:48:38.426152] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:26.358 [2024-07-10 14:48:38.552100] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:30:26.358 [2024-07-10 14:48:38.573119] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:26.358 [2024-07-10 14:48:38.616732] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:26.358 [2024-07-10 14:48:38.616788] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:26.358 [2024-07-10 14:48:38.616802] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:26.358 [2024-07-10 14:48:38.616824] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:26.358 [2024-07-10 14:48:38.616834] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:26.358 [2024-07-10 14:48:38.616965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.358 [2024-07-10 14:48:38.617065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:26.358 [2024-07-10 14:48:38.617531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:26.358 [2024-07-10 14:48:38.617541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 
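The scripts/common.sh trace here is assembling a PCI class filter (class 01 = mass storage, subclass 08 = non-volatile memory controller, prog-if 02 = NVMe); the remaining stages of the pipeline appear just below. Collected into one line, the enumeration the test performs is:

# List the PCI addresses (domain:bus:dev.func) of NVMe controllers, i.e. class/prog-if 0108/02.
# This is the exact pipeline nvme_in_userspace builds in the trace.
lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
# On this VM it yields the two NVMe devices the test goes on to use:
#   0000:00:10.0
#   0000:00:11.0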
00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:30:27.292 14:48:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:27.292 ************************************ 00:30:27.292 START TEST spdk_target_abort 00:30:27.292 ************************************ 00:30:27.292 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:30:27.292 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:30:27.292 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:30:27.292 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.292 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:27.292 spdk_targetn1 00:30:27.292 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.292 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:27.292 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.292 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:27.292 [2024-07-10 14:48:39.563055] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:27.292 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.292 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:30:27.292 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.292 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:27.550 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.550 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:30:27.550 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.550 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:27.550 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.550 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:30:27.550 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.550 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:27.550 [2024-07-10 14:48:39.599201] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:27.550 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.550 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:30:27.550 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:27.550 14:48:39 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:27.550 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:30:27.550 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:27.550 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:27.550 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:27.550 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:27.550 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:27.550 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:27.550 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:27.550 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:27.550 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:27.550 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:27.550 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:30:27.550 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:27.550 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:27.550 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:27.550 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:27.550 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:27.550 14:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:30.836 Initializing NVMe Controllers 00:30:30.836 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:30.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:30.836 Initialization complete. Launching workers. 
00:30:30.836 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11530, failed: 0 00:30:30.836 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1098, failed to submit 10432 00:30:30.836 success 706, unsuccess 392, failed 0 00:30:30.836 14:48:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:30.836 14:48:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:34.126 Initializing NVMe Controllers 00:30:34.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:34.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:34.126 Initialization complete. Launching workers. 00:30:34.126 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5956, failed: 0 00:30:34.126 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1260, failed to submit 4696 00:30:34.126 success 256, unsuccess 1004, failed 0 00:30:34.126 14:48:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:34.126 14:48:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:37.412 Initializing NVMe Controllers 00:30:37.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:37.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:37.412 Initialization complete. Launching workers. 
00:30:37.412 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29663, failed: 0 00:30:37.412 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2597, failed to submit 27066 00:30:37.412 success 391, unsuccess 2206, failed 0 00:30:37.413 14:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:30:37.413 14:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.413 14:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:37.413 14:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.413 14:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:30:37.413 14:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.413 14:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 118171 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 118171 ']' 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 118171 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 118171 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:37.980 killing process with pid 118171 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 118171' 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 118171 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 118171 00:30:37.980 00:30:37.980 real 0m10.736s 00:30:37.980 user 0m43.984s 00:30:37.980 sys 0m1.714s 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:37.980 ************************************ 00:30:37.980 END TEST spdk_target_abort 00:30:37.980 ************************************ 00:30:37.980 14:48:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:30:37.980 14:48:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:30:37.980 14:48:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:37.980 14:48:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:37.980 14:48:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:37.980 
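For reference, the spdk_target_abort run that just completed reduces to a handful of RPC calls plus the abort example at three queue depths. The sketch below strings together the exact calls from the trace, assuming rpc_cmd is replaced by a direct invocation of scripts/rpc.py against an already-running nvmf_tgt:

# Export the local NVMe device at 0000:00:10.0 over NVMe-oF/TCP and abort-test it.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target   # yields bdev spdk_targetn1
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
# Drive it with aborts at the queue depths used above (4, 24, 64):
for qd in 4 24 64; do
  /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done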
************************************ 00:30:37.980 START TEST kernel_target_abort 00:30:37.980 ************************************ 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:30:37.980 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:30:38.237 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:38.237 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:38.494 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:38.494 Waiting for block devices as requested 00:30:38.494 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:38.494 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:30:38.752 No valid GPT data, bailing 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:30:38.752 No valid GPT data, bailing 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:30:38.752 14:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:30:38.752 No valid GPT data, bailing 00:30:38.752 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:30:39.010 No valid GPT data, bailing 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:39.010 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 --hostid=29002397-6866-4d44-9964-2c83ec2680a9 -a 10.0.0.1 -t tcp -s 4420 00:30:39.010 00:30:39.010 Discovery Log Number of Records 2, Generation counter 2 00:30:39.010 =====Discovery Log Entry 0====== 00:30:39.010 trtype: tcp 00:30:39.010 adrfam: ipv4 00:30:39.010 subtype: current discovery subsystem 00:30:39.010 treq: not specified, sq flow control disable supported 00:30:39.010 portid: 1 00:30:39.010 trsvcid: 4420 00:30:39.010 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:39.010 traddr: 10.0.0.1 00:30:39.010 eflags: none 00:30:39.010 sectype: none 00:30:39.010 =====Discovery Log Entry 1====== 00:30:39.010 trtype: tcp 00:30:39.010 adrfam: ipv4 00:30:39.010 subtype: nvme subsystem 00:30:39.010 treq: not specified, sq flow control disable supported 00:30:39.010 portid: 1 00:30:39.010 trsvcid: 4420 00:30:39.010 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:39.010 traddr: 10.0.0.1 00:30:39.010 eflags: none 00:30:39.010 sectype: none 00:30:39.011 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:30:39.011 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:39.011 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:39.011 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:30:39.011 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:39.011 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:39.011 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:39.011 14:48:51 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:39.011 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:39.011 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:39.011 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:39.011 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:39.011 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:39.011 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:39.011 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:30:39.011 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:39.011 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:30:39.011 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:39.011 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:39.011 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:39.011 14:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:42.309 Initializing NVMe Controllers 00:30:42.309 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:42.309 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:42.309 Initialization complete. Launching workers. 00:30:42.309 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34375, failed: 0 00:30:42.309 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34375, failed to submit 0 00:30:42.309 success 0, unsuccess 34375, failed 0 00:30:42.309 14:48:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:42.309 14:48:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:45.591 Initializing NVMe Controllers 00:30:45.591 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:45.591 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:45.591 Initialization complete. Launching workers. 
00:30:45.591 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65618, failed: 0 00:30:45.591 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28385, failed to submit 37233 00:30:45.591 success 0, unsuccess 28385, failed 0 00:30:45.591 14:48:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:45.591 14:48:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:48.875 Initializing NVMe Controllers 00:30:48.875 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:48.876 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:48.876 Initialization complete. Launching workers. 00:30:48.876 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 72432, failed: 0 00:30:48.876 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18124, failed to submit 54308 00:30:48.876 success 0, unsuccess 18124, failed 0 00:30:48.876 14:49:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:30:48.876 14:49:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:48.876 14:49:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:30:48.876 14:49:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:48.876 14:49:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:48.876 14:49:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:48.876 14:49:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:48.876 14:49:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:30:48.876 14:49:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:30:48.876 14:49:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:49.134 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:50.070 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:50.070 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:50.070 00:30:50.070 real 0m12.030s 00:30:50.070 user 0m6.200s 00:30:50.070 sys 0m3.236s 00:30:50.070 14:49:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:50.070 14:49:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:50.070 ************************************ 00:30:50.070 END TEST kernel_target_abort 00:30:50.070 ************************************ 00:30:50.070 14:49:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:30:50.070 14:49:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:50.070 
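In compact form, the kernel_target_abort body that just finished is a sweep of the abort example over queue depths 4, 24 and 64 against that target, followed by the clean_kernel_target teardown. Both halves are sketched below with the binary paths and options visible in the trace; the namespace-disable write is an assumption, since only a bare "echo 0" appears in the log.

    # Queue-depth sweep against the kernel target (binary, options and transport
    # string are the ones visible in the trace above).
    r='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$r"
    done

    # Teardown as in clean_kernel_target: disable the namespace, unlink the port,
    # remove the configfs tree bottom-up, then unload the nvmet modules.
    echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable  # target of "echo 0" assumed
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe -r nvmet_tcp nvmet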
14:49:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:30:50.070 14:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:50.070 14:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:30:50.328 14:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:50.328 14:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:30:50.328 14:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:50.328 14:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:50.328 rmmod nvme_tcp 00:30:50.328 rmmod nvme_fabrics 00:30:50.328 rmmod nvme_keyring 00:30:50.328 14:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:50.328 14:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:30:50.328 14:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:30:50.328 14:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 118171 ']' 00:30:50.328 14:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 118171 00:30:50.328 14:49:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 118171 ']' 00:30:50.328 14:49:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 118171 00:30:50.328 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (118171) - No such process 00:30:50.328 Process with pid 118171 is not found 00:30:50.328 14:49:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 118171 is not found' 00:30:50.328 14:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:50.328 14:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:50.585 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:50.585 Waiting for block devices as requested 00:30:50.585 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:50.846 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:50.846 14:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:50.846 14:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:50.846 14:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:50.846 14:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:50.846 14:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.846 14:49:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:50.846 14:49:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.846 14:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:50.846 00:30:50.846 real 0m25.919s 00:30:50.846 user 0m51.313s 00:30:50.846 sys 0m6.260s 00:30:50.846 14:49:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:50.846 14:49:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:50.846 ************************************ 00:30:50.846 END TEST nvmf_abort_qd_sizes 00:30:50.846 ************************************ 00:30:50.846 14:49:03 -- common/autotest_common.sh@1142 -- # return 0 00:30:50.846 14:49:03 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:30:50.846 14:49:03 -- common/autotest_common.sh@1099 -- # 
'[' 2 -le 1 ']' 00:30:50.846 14:49:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:50.846 14:49:03 -- common/autotest_common.sh@10 -- # set +x 00:30:50.846 ************************************ 00:30:50.846 START TEST keyring_file 00:30:50.846 ************************************ 00:30:50.846 14:49:03 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:30:50.846 * Looking for test storage... 00:30:50.846 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:30:50.846 14:49:03 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:30:50.846 14:49:03 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:50.846 14:49:03 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:30:50.846 14:49:03 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:50.846 14:49:03 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:50.846 14:49:03 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:50.846 14:49:03 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:50.846 14:49:03 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:50.846 14:49:03 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:50.846 14:49:03 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:50.847 14:49:03 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:50.847 14:49:03 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:50.847 14:49:03 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:50.847 14:49:03 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:30:50.847 14:49:03 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:30:50.847 14:49:03 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:50.847 14:49:03 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:50.847 14:49:03 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:50.847 14:49:03 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:50.847 14:49:03 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:50.847 14:49:03 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:50.847 14:49:03 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:50.847 14:49:03 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:50.847 14:49:03 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.847 14:49:03 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.847 14:49:03 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.847 14:49:03 keyring_file -- paths/export.sh@5 -- # export PATH 00:30:50.847 14:49:03 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.847 14:49:03 keyring_file -- nvmf/common.sh@47 -- # : 0 00:30:50.847 14:49:03 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:50.847 14:49:03 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:50.847 14:49:03 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:50.847 14:49:03 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:50.847 14:49:03 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:50.847 14:49:03 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:50.847 14:49:03 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:50.847 14:49:03 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:50.847 14:49:03 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:50.847 14:49:03 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:50.847 14:49:03 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:50.847 14:49:03 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:30:50.847 14:49:03 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:30:50.847 14:49:03 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:30:50.847 14:49:03 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:50.847 14:49:03 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:50.847 14:49:03 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:50.847 14:49:03 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:50.847 14:49:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:50.847 14:49:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:50.847 14:49:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.XKn6wqhk2z 00:30:50.847 14:49:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:50.847 14:49:03 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:30:50.847 14:49:03 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:50.847 14:49:03 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:50.847 14:49:03 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:50.847 14:49:03 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:50.847 14:49:03 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:51.107 14:49:03 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.XKn6wqhk2z 00:30:51.107 14:49:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.XKn6wqhk2z 00:30:51.107 14:49:03 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.XKn6wqhk2z 00:30:51.107 14:49:03 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:30:51.107 14:49:03 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:51.107 14:49:03 keyring_file -- keyring/common.sh@17 -- # name=key1 00:30:51.107 14:49:03 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:51.107 14:49:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:51.107 14:49:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:51.107 14:49:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ieIaBxj1sj 00:30:51.107 14:49:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:51.107 14:49:03 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:51.107 14:49:03 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:51.107 14:49:03 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:51.107 14:49:03 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:51.107 14:49:03 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:51.107 14:49:03 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:51.107 14:49:03 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ieIaBxj1sj 00:30:51.107 14:49:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ieIaBxj1sj 00:30:51.107 14:49:03 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.ieIaBxj1sj 00:30:51.107 14:49:03 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:51.107 14:49:03 keyring_file -- keyring/file.sh@30 -- # tgtpid=119040 00:30:51.107 14:49:03 keyring_file -- keyring/file.sh@32 -- # waitforlisten 119040 00:30:51.107 14:49:03 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 119040 ']' 00:30:51.107 14:49:03 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:51.107 14:49:03 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:51.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:51.107 14:49:03 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:51.107 14:49:03 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:51.107 14:49:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:51.107 [2024-07-10 14:49:03.299155] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 
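The two key files created above (/tmp/tmp.XKn6wqhk2z and /tmp/tmp.ieIaBxj1sj) hold NVMe/TCP PSKs in the interchange format that format_interchange_psk emits through the inline python step. The python body itself is not visible in the trace, so the sketch below is only an approximation, assuming the TP 8006 interchange layout: prefix, two-hex-digit hash indicator, then Base64 of the key bytes with a CRC-32 appended (CRC byte order assumed little-endian), and a trailing colon.

    # Approximate stand-in for format_key / format_interchange_psk as used above.
    # ASSUMPTIONS: TP 8006 layout "<prefix>:<hh>:<Base64(key || CRC-32)>:" with a
    # little-endian CRC; the inline python run by the real script is not shown here.
    format_key() {
        local prefix=$1 key=$2 digest=$3
        python3 -c 'import base64,sys,zlib; k=bytes.fromhex(sys.argv[2]); c=zlib.crc32(k).to_bytes(4,"little"); print("{}:{:02x}:{}:".format(sys.argv[1], int(sys.argv[3]), base64.b64encode(k+c).decode()))' "$prefix" "$key" "$digest"
    }

    path=$(mktemp)                                                   # e.g. /tmp/tmp.XKn6wqhk2z
    format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 > "$path"
    chmod 0600 "$path"   # the test later rejects a key file left at 0660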
00:30:51.107 [2024-07-10 14:49:03.299245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119040 ] 00:30:51.365 [2024-07-10 14:49:03.417508] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:51.365 [2024-07-10 14:49:03.433846] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.365 [2024-07-10 14:49:03.469959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.301 14:49:04 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:52.301 14:49:04 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:52.301 14:49:04 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:30:52.301 14:49:04 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.301 14:49:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:52.301 [2024-07-10 14:49:04.242427] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:52.301 null0 00:30:52.301 [2024-07-10 14:49:04.274390] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:52.301 [2024-07-10 14:49:04.274614] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:52.302 [2024-07-10 14:49:04.282384] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:52.302 14:49:04 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.302 14:49:04 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:52.302 14:49:04 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:52.302 14:49:04 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:52.302 14:49:04 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:52.302 14:49:04 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:52.302 14:49:04 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:52.302 14:49:04 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:52.302 14:49:04 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:52.302 14:49:04 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.302 14:49:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:52.302 [2024-07-10 14:49:04.294382] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:30:52.302 2024/07/10 14:49:04 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:30:52.302 request: 00:30:52.302 { 00:30:52.302 "method": "nvmf_subsystem_add_listener", 00:30:52.302 "params": { 00:30:52.302 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:30:52.302 "secure_channel": false, 00:30:52.302 
"listen_address": { 00:30:52.302 "trtype": "tcp", 00:30:52.302 "traddr": "127.0.0.1", 00:30:52.302 "trsvcid": "4420" 00:30:52.302 } 00:30:52.302 } 00:30:52.302 } 00:30:52.302 Got JSON-RPC error response 00:30:52.302 GoRPCClient: error on JSON-RPC call 00:30:52.302 14:49:04 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:52.302 14:49:04 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:52.302 14:49:04 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:52.302 14:49:04 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:52.302 14:49:04 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:52.302 14:49:04 keyring_file -- keyring/file.sh@46 -- # bperfpid=119074 00:30:52.302 14:49:04 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:30:52.302 14:49:04 keyring_file -- keyring/file.sh@48 -- # waitforlisten 119074 /var/tmp/bperf.sock 00:30:52.302 14:49:04 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 119074 ']' 00:30:52.302 14:49:04 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:52.302 14:49:04 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:52.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:52.302 14:49:04 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:52.302 14:49:04 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:52.302 14:49:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:52.302 [2024-07-10 14:49:04.357801] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:30:52.302 [2024-07-10 14:49:04.357903] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119074 ] 00:30:52.302 [2024-07-10 14:49:04.478429] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:30:52.302 [2024-07-10 14:49:04.496603] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.302 [2024-07-10 14:49:04.548824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:52.561 14:49:04 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:52.561 14:49:04 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:52.561 14:49:04 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XKn6wqhk2z 00:30:52.561 14:49:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XKn6wqhk2z 00:30:52.820 14:49:04 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ieIaBxj1sj 00:30:52.820 14:49:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ieIaBxj1sj 00:30:53.079 14:49:05 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:30:53.079 14:49:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:53.079 14:49:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:53.079 14:49:05 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:30:53.079 14:49:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:53.337 14:49:05 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.XKn6wqhk2z == \/\t\m\p\/\t\m\p\.\X\K\n\6\w\q\h\k\2\z ]] 00:30:53.337 14:49:05 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:30:53.337 14:49:05 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:30:53.337 14:49:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:53.337 14:49:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:53.337 14:49:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:53.596 14:49:05 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.ieIaBxj1sj == \/\t\m\p\/\t\m\p\.\i\e\I\a\B\x\j\1\s\j ]] 00:30:53.596 14:49:05 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:30:53.596 14:49:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:53.596 14:49:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:53.597 14:49:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:53.597 14:49:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:53.597 14:49:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:53.856 14:49:06 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:30:53.856 14:49:06 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:30:53.856 14:49:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:53.856 14:49:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:53.856 14:49:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:53.856 14:49:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:53.856 14:49:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:54.114 14:49:06 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:30:54.114 14:49:06 keyring_file -- keyring/file.sh@57 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:54.114 14:49:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:54.373 [2024-07-10 14:49:06.626643] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:54.632 nvme0n1 00:30:54.632 14:49:06 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:30:54.632 14:49:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:54.632 14:49:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:54.632 14:49:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:54.632 14:49:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:54.632 14:49:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:54.891 14:49:07 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:30:54.891 14:49:07 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:30:54.891 14:49:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:54.891 14:49:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:54.891 14:49:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:54.891 14:49:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:54.891 14:49:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:55.149 14:49:07 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:30:55.149 14:49:07 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:55.149 Running I/O for 1 seconds... 
00:30:56.528 00:30:56.529 Latency(us) 00:30:56.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:56.529 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:30:56.529 nvme0n1 : 1.01 11030.92 43.09 0.00 0.00 11561.97 6285.50 19422.49 00:30:56.529 =================================================================================================================== 00:30:56.529 Total : 11030.92 43.09 0.00 0.00 11561.97 6285.50 19422.49 00:30:56.529 0 00:30:56.529 14:49:08 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:56.529 14:49:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:56.529 14:49:08 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:30:56.529 14:49:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:56.529 14:49:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:56.529 14:49:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:56.529 14:49:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:56.529 14:49:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:56.788 14:49:09 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:30:56.788 14:49:09 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:30:56.788 14:49:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:56.788 14:49:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:56.788 14:49:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:56.788 14:49:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:56.788 14:49:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:57.355 14:49:09 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:30:57.355 14:49:09 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:57.355 14:49:09 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:57.355 14:49:09 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:57.355 14:49:09 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:57.355 14:49:09 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:57.355 14:49:09 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:57.355 14:49:09 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:57.355 14:49:09 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:57.355 14:49:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:57.355 [2024-07-10 14:49:09.639554] 
/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:57.355 [2024-07-10 14:49:09.640373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d101f0 (107): Transport endpoint is not connected 00:30:57.355 [2024-07-10 14:49:09.641363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d101f0 (9): Bad file descriptor 00:30:57.355 [2024-07-10 14:49:09.642361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:57.355 [2024-07-10 14:49:09.642394] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:57.355 [2024-07-10 14:49:09.642405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:57.614 2024/07/10 14:49:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:30:57.614 request: 00:30:57.614 { 00:30:57.614 "method": "bdev_nvme_attach_controller", 00:30:57.614 "params": { 00:30:57.614 "name": "nvme0", 00:30:57.614 "trtype": "tcp", 00:30:57.614 "traddr": "127.0.0.1", 00:30:57.614 "adrfam": "ipv4", 00:30:57.614 "trsvcid": "4420", 00:30:57.614 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:57.614 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:57.614 "prchk_reftag": false, 00:30:57.614 "prchk_guard": false, 00:30:57.614 "hdgst": false, 00:30:57.614 "ddgst": false, 00:30:57.614 "psk": "key1" 00:30:57.614 } 00:30:57.614 } 00:30:57.614 Got JSON-RPC error response 00:30:57.614 GoRPCClient: error on JSON-RPC call 00:30:57.614 14:49:09 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:57.614 14:49:09 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:57.614 14:49:09 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:57.614 14:49:09 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:57.614 14:49:09 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:30:57.614 14:49:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:57.614 14:49:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:57.614 14:49:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:57.614 14:49:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:57.614 14:49:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:57.872 14:49:09 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:30:57.872 14:49:09 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:30:57.872 14:49:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:57.872 14:49:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:57.872 14:49:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:57.872 14:49:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:57.872 14:49:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:30:58.131 14:49:10 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:30:58.131 14:49:10 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:30:58.131 14:49:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:58.389 14:49:10 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:30:58.390 14:49:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:30:58.648 14:49:10 keyring_file -- keyring/file.sh@77 -- # jq length 00:30:58.649 14:49:10 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:30:58.649 14:49:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:58.907 14:49:11 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:30:58.907 14:49:11 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.XKn6wqhk2z 00:30:58.907 14:49:11 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.XKn6wqhk2z 00:30:58.907 14:49:11 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:58.907 14:49:11 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.XKn6wqhk2z 00:30:58.907 14:49:11 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:58.907 14:49:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:58.907 14:49:11 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:58.907 14:49:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:58.907 14:49:11 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XKn6wqhk2z 00:30:58.907 14:49:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XKn6wqhk2z 00:30:59.166 [2024-07-10 14:49:11.388759] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.XKn6wqhk2z': 0100660 00:30:59.166 [2024-07-10 14:49:11.388821] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:30:59.166 2024/07/10 14:49:11 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.XKn6wqhk2z], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:30:59.166 request: 00:30:59.166 { 00:30:59.166 "method": "keyring_file_add_key", 00:30:59.166 "params": { 00:30:59.166 "name": "key0", 00:30:59.166 "path": "/tmp/tmp.XKn6wqhk2z" 00:30:59.166 } 00:30:59.166 } 00:30:59.166 Got JSON-RPC error response 00:30:59.166 GoRPCClient: error on JSON-RPC call 00:30:59.166 14:49:11 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:59.166 14:49:11 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:59.166 14:49:11 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:59.166 14:49:11 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:59.166 14:49:11 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.XKn6wqhk2z 00:30:59.166 14:49:11 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XKn6wqhk2z 00:30:59.166 14:49:11 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XKn6wqhk2z 00:30:59.424 14:49:11 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.XKn6wqhk2z 00:30:59.424 14:49:11 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:30:59.424 14:49:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:59.424 14:49:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:59.424 14:49:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:59.424 14:49:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:59.424 14:49:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:59.683 14:49:11 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:30:59.683 14:49:11 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:59.683 14:49:11 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:59.683 14:49:11 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:59.683 14:49:11 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:59.683 14:49:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:59.683 14:49:11 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:59.683 14:49:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:59.683 14:49:11 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:59.683 14:49:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:59.942 [2024-07-10 14:49:12.172931] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.XKn6wqhk2z': No such file or directory 00:30:59.942 [2024-07-10 14:49:12.172980] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:30:59.942 [2024-07-10 14:49:12.173010] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:30:59.942 [2024-07-10 14:49:12.173019] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:59.942 [2024-07-10 14:49:12.173028] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:30:59.942 2024/07/10 14:49:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:30:59.942 request: 00:30:59.942 { 00:30:59.942 "method": "bdev_nvme_attach_controller", 00:30:59.942 "params": { 
00:30:59.942 "name": "nvme0", 00:30:59.942 "trtype": "tcp", 00:30:59.942 "traddr": "127.0.0.1", 00:30:59.942 "adrfam": "ipv4", 00:30:59.942 "trsvcid": "4420", 00:30:59.942 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:59.942 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:59.942 "prchk_reftag": false, 00:30:59.942 "prchk_guard": false, 00:30:59.942 "hdgst": false, 00:30:59.942 "ddgst": false, 00:30:59.942 "psk": "key0" 00:30:59.942 } 00:30:59.942 } 00:30:59.942 Got JSON-RPC error response 00:30:59.942 GoRPCClient: error on JSON-RPC call 00:30:59.942 14:49:12 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:59.942 14:49:12 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:59.942 14:49:12 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:59.942 14:49:12 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:59.942 14:49:12 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:30:59.942 14:49:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:00.200 14:49:12 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:00.200 14:49:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:00.200 14:49:12 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:00.200 14:49:12 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:00.200 14:49:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:00.200 14:49:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:00.200 14:49:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ggZKZvQJye 00:31:00.200 14:49:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:00.200 14:49:12 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:00.200 14:49:12 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:00.201 14:49:12 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:00.201 14:49:12 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:00.201 14:49:12 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:00.201 14:49:12 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:00.201 14:49:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ggZKZvQJye 00:31:00.459 14:49:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ggZKZvQJye 00:31:00.459 14:49:12 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.ggZKZvQJye 00:31:00.459 14:49:12 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ggZKZvQJye 00:31:00.459 14:49:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ggZKZvQJye 00:31:00.718 14:49:12 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:00.718 14:49:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:00.976 nvme0n1 00:31:00.976 14:49:13 keyring_file -- keyring/file.sh@99 -- # get_refcnt 
key0 00:31:00.976 14:49:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:00.976 14:49:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:00.976 14:49:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:00.976 14:49:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:00.976 14:49:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:01.234 14:49:13 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:31:01.234 14:49:13 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:31:01.234 14:49:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:01.801 14:49:13 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:31:01.801 14:49:13 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:31:01.801 14:49:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:01.801 14:49:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:01.801 14:49:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:02.059 14:49:14 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:31:02.060 14:49:14 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:31:02.060 14:49:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:02.060 14:49:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:02.060 14:49:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:02.060 14:49:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:02.060 14:49:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:02.318 14:49:14 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:31:02.318 14:49:14 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:02.318 14:49:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:02.577 14:49:14 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:31:02.577 14:49:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:02.577 14:49:14 keyring_file -- keyring/file.sh@104 -- # jq length 00:31:02.835 14:49:15 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:31:02.835 14:49:15 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ggZKZvQJye 00:31:02.835 14:49:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ggZKZvQJye 00:31:03.443 14:49:15 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ieIaBxj1sj 00:31:03.443 14:49:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ieIaBxj1sj 00:31:03.443 14:49:15 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:03.443 14:49:15 keyring_file 
-- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:04.009 nvme0n1 00:31:04.009 14:49:16 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:31:04.009 14:49:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:31:04.269 14:49:16 keyring_file -- keyring/file.sh@112 -- # config='{ 00:31:04.269 "subsystems": [ 00:31:04.269 { 00:31:04.269 "subsystem": "keyring", 00:31:04.269 "config": [ 00:31:04.269 { 00:31:04.269 "method": "keyring_file_add_key", 00:31:04.269 "params": { 00:31:04.269 "name": "key0", 00:31:04.269 "path": "/tmp/tmp.ggZKZvQJye" 00:31:04.269 } 00:31:04.269 }, 00:31:04.269 { 00:31:04.269 "method": "keyring_file_add_key", 00:31:04.269 "params": { 00:31:04.269 "name": "key1", 00:31:04.269 "path": "/tmp/tmp.ieIaBxj1sj" 00:31:04.269 } 00:31:04.269 } 00:31:04.269 ] 00:31:04.269 }, 00:31:04.269 { 00:31:04.269 "subsystem": "iobuf", 00:31:04.269 "config": [ 00:31:04.269 { 00:31:04.269 "method": "iobuf_set_options", 00:31:04.269 "params": { 00:31:04.269 "large_bufsize": 135168, 00:31:04.269 "large_pool_count": 1024, 00:31:04.269 "small_bufsize": 8192, 00:31:04.269 "small_pool_count": 8192 00:31:04.269 } 00:31:04.269 } 00:31:04.269 ] 00:31:04.269 }, 00:31:04.269 { 00:31:04.269 "subsystem": "sock", 00:31:04.269 "config": [ 00:31:04.269 { 00:31:04.269 "method": "sock_set_default_impl", 00:31:04.269 "params": { 00:31:04.269 "impl_name": "posix" 00:31:04.269 } 00:31:04.269 }, 00:31:04.269 { 00:31:04.269 "method": "sock_impl_set_options", 00:31:04.269 "params": { 00:31:04.269 "enable_ktls": false, 00:31:04.269 "enable_placement_id": 0, 00:31:04.269 "enable_quickack": false, 00:31:04.269 "enable_recv_pipe": true, 00:31:04.269 "enable_zerocopy_send_client": false, 00:31:04.269 "enable_zerocopy_send_server": true, 00:31:04.269 "impl_name": "ssl", 00:31:04.269 "recv_buf_size": 4096, 00:31:04.269 "send_buf_size": 4096, 00:31:04.269 "tls_version": 0, 00:31:04.269 "zerocopy_threshold": 0 00:31:04.269 } 00:31:04.269 }, 00:31:04.269 { 00:31:04.269 "method": "sock_impl_set_options", 00:31:04.269 "params": { 00:31:04.269 "enable_ktls": false, 00:31:04.269 "enable_placement_id": 0, 00:31:04.269 "enable_quickack": false, 00:31:04.269 "enable_recv_pipe": true, 00:31:04.269 "enable_zerocopy_send_client": false, 00:31:04.269 "enable_zerocopy_send_server": true, 00:31:04.269 "impl_name": "posix", 00:31:04.269 "recv_buf_size": 2097152, 00:31:04.269 "send_buf_size": 2097152, 00:31:04.269 "tls_version": 0, 00:31:04.269 "zerocopy_threshold": 0 00:31:04.269 } 00:31:04.269 } 00:31:04.269 ] 00:31:04.269 }, 00:31:04.269 { 00:31:04.269 "subsystem": "vmd", 00:31:04.269 "config": [] 00:31:04.269 }, 00:31:04.269 { 00:31:04.269 "subsystem": "accel", 00:31:04.269 "config": [ 00:31:04.269 { 00:31:04.269 "method": "accel_set_options", 00:31:04.269 "params": { 00:31:04.269 "buf_count": 2048, 00:31:04.269 "large_cache_size": 16, 00:31:04.269 "sequence_count": 2048, 00:31:04.269 "small_cache_size": 128, 00:31:04.269 "task_count": 2048 00:31:04.269 } 00:31:04.269 } 00:31:04.269 ] 00:31:04.269 }, 00:31:04.269 { 00:31:04.269 "subsystem": "bdev", 00:31:04.269 "config": [ 00:31:04.269 { 00:31:04.269 "method": "bdev_set_options", 00:31:04.269 "params": { 00:31:04.269 "bdev_auto_examine": true, 00:31:04.269 "bdev_io_cache_size": 256, 00:31:04.269 
"bdev_io_pool_size": 65535, 00:31:04.269 "iobuf_large_cache_size": 16, 00:31:04.269 "iobuf_small_cache_size": 128 00:31:04.269 } 00:31:04.269 }, 00:31:04.269 { 00:31:04.269 "method": "bdev_raid_set_options", 00:31:04.269 "params": { 00:31:04.269 "process_window_size_kb": 1024 00:31:04.269 } 00:31:04.269 }, 00:31:04.269 { 00:31:04.269 "method": "bdev_iscsi_set_options", 00:31:04.269 "params": { 00:31:04.269 "timeout_sec": 30 00:31:04.269 } 00:31:04.269 }, 00:31:04.269 { 00:31:04.269 "method": "bdev_nvme_set_options", 00:31:04.269 "params": { 00:31:04.269 "action_on_timeout": "none", 00:31:04.269 "allow_accel_sequence": false, 00:31:04.269 "arbitration_burst": 0, 00:31:04.269 "bdev_retry_count": 3, 00:31:04.269 "ctrlr_loss_timeout_sec": 0, 00:31:04.269 "delay_cmd_submit": true, 00:31:04.269 "dhchap_dhgroups": [ 00:31:04.269 "null", 00:31:04.269 "ffdhe2048", 00:31:04.269 "ffdhe3072", 00:31:04.269 "ffdhe4096", 00:31:04.269 "ffdhe6144", 00:31:04.269 "ffdhe8192" 00:31:04.269 ], 00:31:04.269 "dhchap_digests": [ 00:31:04.269 "sha256", 00:31:04.269 "sha384", 00:31:04.269 "sha512" 00:31:04.269 ], 00:31:04.269 "disable_auto_failback": false, 00:31:04.269 "fast_io_fail_timeout_sec": 0, 00:31:04.269 "generate_uuids": false, 00:31:04.269 "high_priority_weight": 0, 00:31:04.269 "io_path_stat": false, 00:31:04.269 "io_queue_requests": 512, 00:31:04.269 "keep_alive_timeout_ms": 10000, 00:31:04.269 "low_priority_weight": 0, 00:31:04.269 "medium_priority_weight": 0, 00:31:04.269 "nvme_adminq_poll_period_us": 10000, 00:31:04.269 "nvme_error_stat": false, 00:31:04.269 "nvme_ioq_poll_period_us": 0, 00:31:04.269 "rdma_cm_event_timeout_ms": 0, 00:31:04.269 "rdma_max_cq_size": 0, 00:31:04.269 "rdma_srq_size": 0, 00:31:04.269 "reconnect_delay_sec": 0, 00:31:04.269 "timeout_admin_us": 0, 00:31:04.269 "timeout_us": 0, 00:31:04.269 "transport_ack_timeout": 0, 00:31:04.269 "transport_retry_count": 4, 00:31:04.269 "transport_tos": 0 00:31:04.269 } 00:31:04.269 }, 00:31:04.269 { 00:31:04.269 "method": "bdev_nvme_attach_controller", 00:31:04.269 "params": { 00:31:04.270 "adrfam": "IPv4", 00:31:04.270 "ctrlr_loss_timeout_sec": 0, 00:31:04.270 "ddgst": false, 00:31:04.270 "fast_io_fail_timeout_sec": 0, 00:31:04.270 "hdgst": false, 00:31:04.270 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:04.270 "name": "nvme0", 00:31:04.270 "prchk_guard": false, 00:31:04.270 "prchk_reftag": false, 00:31:04.270 "psk": "key0", 00:31:04.270 "reconnect_delay_sec": 0, 00:31:04.270 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:04.270 "traddr": "127.0.0.1", 00:31:04.270 "trsvcid": "4420", 00:31:04.270 "trtype": "TCP" 00:31:04.270 } 00:31:04.270 }, 00:31:04.270 { 00:31:04.270 "method": "bdev_nvme_set_hotplug", 00:31:04.270 "params": { 00:31:04.270 "enable": false, 00:31:04.270 "period_us": 100000 00:31:04.270 } 00:31:04.270 }, 00:31:04.270 { 00:31:04.270 "method": "bdev_wait_for_examine" 00:31:04.270 } 00:31:04.270 ] 00:31:04.270 }, 00:31:04.270 { 00:31:04.270 "subsystem": "nbd", 00:31:04.270 "config": [] 00:31:04.270 } 00:31:04.270 ] 00:31:04.270 }' 00:31:04.270 14:49:16 keyring_file -- keyring/file.sh@114 -- # killprocess 119074 00:31:04.270 14:49:16 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 119074 ']' 00:31:04.270 14:49:16 keyring_file -- common/autotest_common.sh@952 -- # kill -0 119074 00:31:04.270 14:49:16 keyring_file -- common/autotest_common.sh@953 -- # uname 00:31:04.270 14:49:16 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:04.270 14:49:16 keyring_file -- common/autotest_common.sh@954 -- 
# ps --no-headers -o comm= 119074 00:31:04.270 killing process with pid 119074 00:31:04.270 Received shutdown signal, test time was about 1.000000 seconds 00:31:04.270 00:31:04.270 Latency(us) 00:31:04.270 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:04.270 =================================================================================================================== 00:31:04.270 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:04.270 14:49:16 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:04.270 14:49:16 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:04.270 14:49:16 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119074' 00:31:04.270 14:49:16 keyring_file -- common/autotest_common.sh@967 -- # kill 119074 00:31:04.270 14:49:16 keyring_file -- common/autotest_common.sh@972 -- # wait 119074 00:31:04.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:04.530 14:49:16 keyring_file -- keyring/file.sh@117 -- # bperfpid=119540 00:31:04.530 14:49:16 keyring_file -- keyring/file.sh@119 -- # waitforlisten 119540 /var/tmp/bperf.sock 00:31:04.530 14:49:16 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:31:04.530 14:49:16 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 119540 ']' 00:31:04.530 14:49:16 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:31:04.530 "subsystems": [ 00:31:04.530 { 00:31:04.530 "subsystem": "keyring", 00:31:04.530 "config": [ 00:31:04.530 { 00:31:04.530 "method": "keyring_file_add_key", 00:31:04.530 "params": { 00:31:04.530 "name": "key0", 00:31:04.530 "path": "/tmp/tmp.ggZKZvQJye" 00:31:04.530 } 00:31:04.530 }, 00:31:04.530 { 00:31:04.530 "method": "keyring_file_add_key", 00:31:04.530 "params": { 00:31:04.530 "name": "key1", 00:31:04.530 "path": "/tmp/tmp.ieIaBxj1sj" 00:31:04.530 } 00:31:04.530 } 00:31:04.530 ] 00:31:04.530 }, 00:31:04.530 { 00:31:04.530 "subsystem": "iobuf", 00:31:04.530 "config": [ 00:31:04.530 { 00:31:04.530 "method": "iobuf_set_options", 00:31:04.530 "params": { 00:31:04.530 "large_bufsize": 135168, 00:31:04.530 "large_pool_count": 1024, 00:31:04.530 "small_bufsize": 8192, 00:31:04.530 "small_pool_count": 8192 00:31:04.530 } 00:31:04.530 } 00:31:04.530 ] 00:31:04.530 }, 00:31:04.530 { 00:31:04.530 "subsystem": "sock", 00:31:04.530 "config": [ 00:31:04.530 { 00:31:04.530 "method": "sock_set_default_impl", 00:31:04.530 "params": { 00:31:04.530 "impl_name": "posix" 00:31:04.530 } 00:31:04.530 }, 00:31:04.530 { 00:31:04.530 "method": "sock_impl_set_options", 00:31:04.530 "params": { 00:31:04.530 "enable_ktls": false, 00:31:04.530 "enable_placement_id": 0, 00:31:04.530 "enable_quickack": false, 00:31:04.530 "enable_recv_pipe": true, 00:31:04.530 "enable_zerocopy_send_client": false, 00:31:04.530 "enable_zerocopy_send_server": true, 00:31:04.530 "impl_name": "ssl", 00:31:04.530 "recv_buf_size": 4096, 00:31:04.530 "send_buf_size": 4096, 00:31:04.530 "tls_version": 0, 00:31:04.530 "zerocopy_threshold": 0 00:31:04.530 } 00:31:04.530 }, 00:31:04.530 { 00:31:04.530 "method": "sock_impl_set_options", 00:31:04.530 "params": { 00:31:04.530 "enable_ktls": false, 00:31:04.530 "enable_placement_id": 0, 00:31:04.530 "enable_quickack": false, 00:31:04.530 "enable_recv_pipe": true, 00:31:04.530 "enable_zerocopy_send_client": false, 00:31:04.530 
"enable_zerocopy_send_server": true, 00:31:04.530 "impl_name": "posix", 00:31:04.530 "recv_buf_size": 2097152, 00:31:04.530 "send_buf_size": 2097152, 00:31:04.530 "tls_version": 0, 00:31:04.530 "zerocopy_threshold": 0 00:31:04.530 } 00:31:04.530 } 00:31:04.530 ] 00:31:04.530 }, 00:31:04.530 { 00:31:04.530 "subsystem": "vmd", 00:31:04.530 "config": [] 00:31:04.530 }, 00:31:04.530 { 00:31:04.530 "subsystem": "accel", 00:31:04.530 "config": [ 00:31:04.530 { 00:31:04.530 "method": "accel_set_options", 00:31:04.530 "params": { 00:31:04.530 "buf_count": 2048, 00:31:04.530 "large_cache_size": 16, 00:31:04.530 "sequence_count": 2048, 00:31:04.530 "small_cache_size": 128, 00:31:04.530 "task_count": 2048 00:31:04.530 } 00:31:04.530 } 00:31:04.530 ] 00:31:04.530 }, 00:31:04.530 { 00:31:04.530 "subsystem": "bdev", 00:31:04.530 "config": [ 00:31:04.530 { 00:31:04.530 "method": "bdev_set_options", 00:31:04.530 "params": { 00:31:04.530 "bdev_auto_examine": true, 00:31:04.530 "bdev_io_cache_size": 256, 00:31:04.530 "bdev_io_pool_size": 65535, 00:31:04.530 "iobuf_large_cache_size": 16, 00:31:04.530 "iobuf_small_cache_size": 128 00:31:04.530 } 00:31:04.530 }, 00:31:04.530 { 00:31:04.530 "method": "bdev_raid_set_options", 00:31:04.530 "params": { 00:31:04.530 "process_window_size_kb": 1024 00:31:04.530 } 00:31:04.530 }, 00:31:04.530 { 00:31:04.530 "method": "bdev_iscsi_set_options", 00:31:04.530 "params": { 00:31:04.530 "timeout_sec": 30 00:31:04.530 } 00:31:04.530 }, 00:31:04.530 { 00:31:04.530 "method": "bdev_nvme_set_options", 00:31:04.530 "params": { 00:31:04.530 "action_on_timeout": "none", 00:31:04.530 "allow_accel_sequence": false, 00:31:04.530 "arbitration_burst": 0, 00:31:04.530 "bdev_retry_count": 3, 00:31:04.530 "ctrlr_loss_timeout_sec": 0, 00:31:04.530 "delay_cmd_submit": true, 00:31:04.530 "dhchap_dhgroups": [ 00:31:04.530 "null", 00:31:04.530 "ffdhe2048", 00:31:04.530 "ffdhe3072", 00:31:04.530 "ffdhe4096", 00:31:04.530 "ffdhe6144", 00:31:04.530 "ffdhe8192" 00:31:04.530 ], 00:31:04.530 "dhchap_digests": [ 00:31:04.530 "sha256", 00:31:04.530 "sha384", 00:31:04.530 "sha512" 00:31:04.530 ], 00:31:04.530 "disable_auto_failback": false, 00:31:04.530 "fast_io_fail_timeout_sec": 0, 00:31:04.530 "generate_uuids": false, 00:31:04.530 "high_priority_weight": 0, 00:31:04.530 "io_path_stat": false, 00:31:04.530 "io_queue_requests": 512, 00:31:04.530 "keep_alive_timeout_ms": 10000, 00:31:04.530 "low_priority_weight": 0, 00:31:04.530 "medium_priority_weight": 0, 00:31:04.530 "nvme_adminq_poll_period_us": 10000, 00:31:04.530 "nvme_error_stat": false, 00:31:04.530 "nvme_ioq_poll_period_us": 0, 00:31:04.530 "rdma_cm_event_timeout_ms": 0, 00:31:04.530 "rdma_max_cq_size": 0, 00:31:04.530 "rdma_srq_size": 0, 00:31:04.530 "reconnect_delay_sec": 0, 00:31:04.530 "timeout_admin_us": 0, 00:31:04.530 "timeout_us": 0, 00:31:04.530 "transport_ack_timeout": 0, 00:31:04.530 "transport_retry_count": 4, 00:31:04.530 "transport_tos": 0 00:31:04.530 } 00:31:04.530 }, 00:31:04.530 { 00:31:04.530 "method": "bdev_nvme_attach_controller", 00:31:04.530 "params": { 00:31:04.530 "adrfam": "IPv4", 00:31:04.530 "ctrlr_loss_timeout_sec": 0, 00:31:04.530 "ddgst": false, 00:31:04.530 "fast_io_fail_timeout_sec": 0, 00:31:04.530 "hdgst": false, 00:31:04.530 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:04.530 "name": "nvme0", 00:31:04.530 "prchk_guard": false, 00:31:04.530 "prchk_reftag": false, 00:31:04.530 "psk": "key0", 00:31:04.530 "reconnect_delay_sec": 0, 00:31:04.530 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:04.530 "traddr": 
"127.0.0.1", 00:31:04.530 "trsvcid": "4420", 00:31:04.530 "trtype": "TCP" 00:31:04.530 } 00:31:04.530 }, 00:31:04.530 { 00:31:04.530 "method": "bdev_nvme_set_hotplug", 00:31:04.530 "params": { 00:31:04.530 "enable": false, 00:31:04.530 "period_us": 100000 00:31:04.530 } 00:31:04.530 }, 00:31:04.530 { 00:31:04.530 "method": "bdev_wait_for_examine" 00:31:04.530 } 00:31:04.530 ] 00:31:04.530 }, 00:31:04.530 { 00:31:04.530 "subsystem": "nbd", 00:31:04.530 "config": [] 00:31:04.530 } 00:31:04.530 ] 00:31:04.530 }' 00:31:04.530 14:49:16 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:04.530 14:49:16 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:04.531 14:49:16 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:04.531 14:49:16 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:04.531 14:49:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:04.531 [2024-07-10 14:49:16.673120] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:31:04.531 [2024-07-10 14:49:16.673217] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119540 ] 00:31:04.531 [2024-07-10 14:49:16.794133] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:04.531 [2024-07-10 14:49:16.814278] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.790 [2024-07-10 14:49:16.850231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:04.790 [2024-07-10 14:49:16.988935] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:05.725 14:49:17 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:05.725 14:49:17 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:31:05.725 14:49:17 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:31:05.725 14:49:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:05.725 14:49:17 keyring_file -- keyring/file.sh@120 -- # jq length 00:31:05.725 14:49:17 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:31:05.725 14:49:18 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:31:05.725 14:49:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:05.725 14:49:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:05.725 14:49:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:05.725 14:49:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:05.725 14:49:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:06.291 14:49:18 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:31:06.291 14:49:18 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:31:06.291 14:49:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:06.291 14:49:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:06.291 14:49:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:31:06.291 14:49:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:06.291 14:49:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:06.549 14:49:18 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:31:06.549 14:49:18 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:31:06.549 14:49:18 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:31:06.549 14:49:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:31:06.807 14:49:18 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:31:06.807 14:49:18 keyring_file -- keyring/file.sh@1 -- # cleanup 00:31:06.807 14:49:18 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.ggZKZvQJye /tmp/tmp.ieIaBxj1sj 00:31:06.807 14:49:18 keyring_file -- keyring/file.sh@20 -- # killprocess 119540 00:31:06.807 14:49:18 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 119540 ']' 00:31:06.807 14:49:18 keyring_file -- common/autotest_common.sh@952 -- # kill -0 119540 00:31:06.807 14:49:18 keyring_file -- common/autotest_common.sh@953 -- # uname 00:31:06.807 14:49:18 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:06.807 14:49:18 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119540 00:31:06.807 killing process with pid 119540 00:31:06.807 Received shutdown signal, test time was about 1.000000 seconds 00:31:06.807 00:31:06.807 Latency(us) 00:31:06.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:06.807 =================================================================================================================== 00:31:06.807 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:06.807 14:49:19 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:06.807 14:49:19 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:06.807 14:49:19 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119540' 00:31:06.807 14:49:19 keyring_file -- common/autotest_common.sh@967 -- # kill 119540 00:31:06.807 14:49:19 keyring_file -- common/autotest_common.sh@972 -- # wait 119540 00:31:07.064 14:49:19 keyring_file -- keyring/file.sh@21 -- # killprocess 119040 00:31:07.064 14:49:19 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 119040 ']' 00:31:07.064 14:49:19 keyring_file -- common/autotest_common.sh@952 -- # kill -0 119040 00:31:07.064 14:49:19 keyring_file -- common/autotest_common.sh@953 -- # uname 00:31:07.064 14:49:19 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:07.064 14:49:19 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119040 00:31:07.064 killing process with pid 119040 00:31:07.064 14:49:19 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:07.064 14:49:19 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:07.064 14:49:19 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119040' 00:31:07.064 14:49:19 keyring_file -- common/autotest_common.sh@967 -- # kill 119040 00:31:07.064 [2024-07-10 14:49:19.175465] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 
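The refcount and key-presence assertions traced above all go through a small set of helpers in test/keyring/common.sh. A minimal sketch, reconstructed from the common.sh@8/@10/@12 call sites visible in this trace (the quoting is an approximation of the source, not a verbatim copy):

bperf_cmd() {
  # common.sh@8: every RPC in this test goes to the bdevperf UNIX-domain socket
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
}
get_key() {
  # common.sh@10: select one key object out of keyring_get_keys by name
  bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"
}
get_refcnt() {
  # common.sh@12: reference count used by checks such as (( $(get_refcnt key0) == 1 ))
  get_key "$1" | jq -r .refcnt
}
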
00:31:07.064 14:49:19 keyring_file -- common/autotest_common.sh@972 -- # wait 119040 00:31:07.322 00:31:07.322 real 0m16.385s 00:31:07.322 user 0m42.158s 00:31:07.322 sys 0m3.103s 00:31:07.322 14:49:19 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:07.322 14:49:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:07.322 ************************************ 00:31:07.322 END TEST keyring_file 00:31:07.322 ************************************ 00:31:07.322 14:49:19 -- common/autotest_common.sh@1142 -- # return 0 00:31:07.322 14:49:19 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:31:07.322 14:49:19 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:31:07.322 14:49:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:07.322 14:49:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:07.322 14:49:19 -- common/autotest_common.sh@10 -- # set +x 00:31:07.322 ************************************ 00:31:07.322 START TEST keyring_linux 00:31:07.322 ************************************ 00:31:07.322 14:49:19 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:31:07.322 * Looking for test storage... 00:31:07.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:31:07.322 14:49:19 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:31:07.322 14:49:19 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:07.322 14:49:19 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:31:07.322 14:49:19 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:07.322 14:49:19 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:07.322 14:49:19 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:07.322 14:49:19 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:07.322 14:49:19 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:07.323 14:49:19 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:07.323 14:49:19 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:07.323 14:49:19 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:07.323 14:49:19 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:07.323 14:49:19 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:07.323 14:49:19 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29002397-6866-4d44-9964-2c83ec2680a9 00:31:07.323 14:49:19 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=29002397-6866-4d44-9964-2c83ec2680a9 00:31:07.323 14:49:19 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:07.323 14:49:19 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:07.323 14:49:19 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:07.323 14:49:19 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:07.323 14:49:19 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:07.323 14:49:19 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:07.323 14:49:19 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:07.323 14:49:19 keyring_linux -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:31:07.323 14:49:19 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.323 14:49:19 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.323 14:49:19 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.323 14:49:19 keyring_linux -- paths/export.sh@5 -- # export PATH 00:31:07.323 14:49:19 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.323 14:49:19 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:31:07.323 14:49:19 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:07.323 14:49:19 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:07.323 14:49:19 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:07.323 14:49:19 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:07.323 14:49:19 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:07.323 14:49:19 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:07.323 14:49:19 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:07.323 14:49:19 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:07.323 14:49:19 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:07.323 14:49:19 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:07.323 14:49:19 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:07.323 14:49:19 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:31:07.323 14:49:19 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:31:07.323 14:49:19 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:31:07.323 14:49:19 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:31:07.323 14:49:19 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:07.323 14:49:19 keyring_linux -- keyring/common.sh@17 -- # 
name=key0 00:31:07.323 14:49:19 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:07.323 14:49:19 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:07.323 14:49:19 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:31:07.323 14:49:19 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:07.323 14:49:19 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:07.323 14:49:19 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:31:07.323 14:49:19 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:07.323 14:49:19 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:07.323 14:49:19 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:31:07.323 14:49:19 keyring_linux -- nvmf/common.sh@705 -- # python - 00:31:07.323 14:49:19 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:31:07.581 /tmp/:spdk-test:key0 00:31:07.581 14:49:19 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:31:07.581 14:49:19 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:31:07.581 14:49:19 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:07.581 14:49:19 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:31:07.581 14:49:19 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:07.581 14:49:19 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:07.581 14:49:19 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:31:07.581 14:49:19 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:07.581 14:49:19 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:07.581 14:49:19 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:31:07.581 14:49:19 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:07.581 14:49:19 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:07.581 14:49:19 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:31:07.581 14:49:19 keyring_linux -- nvmf/common.sh@705 -- # python - 00:31:07.581 14:49:19 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:31:07.581 /tmp/:spdk-test:key1 00:31:07.581 14:49:19 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:31:07.581 14:49:19 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=119690 00:31:07.581 14:49:19 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:07.581 14:49:19 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 119690 00:31:07.581 14:49:19 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 119690 ']' 00:31:07.581 14:49:19 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:07.581 14:49:19 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:07.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:07.581 14:49:19 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
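The /tmp/:spdk-test:key0 and key1 files prepared above are produced by format_interchange_psk, whose inline `python -` (nvmf/common.sh@705) emits the NVMe/TCP PSK interchange form. Below is a hypothetical stand-in for that helper, assuming the payload is base64 of the key bytes followed by a little-endian CRC32; the NVMeTLSkey-1:00:...: framing matches the values printed later in this trace, but the exact CRC handling is an assumption rather than the SPDK source, and `format_key_sketch` is an illustrative name:

format_key_sketch() {
  # Assumption: interchange form is "<prefix>:<2-digit digest>:base64(key || CRC32):"
  local prefix=$1 key=$2 digest=$3
  python3 -c '
import base64, struct, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
payload = base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode()
print(f"{prefix}:{digest:02x}:{payload}:")
' "$prefix" "$key" "$digest"
}
# Usage mirroring prep_key above: write the interchange PSK to the key file and
# restrict its permissions, as the chmod 0600 in the trace does.
# format_key_sketch NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 \
#   > /tmp/:spdk-test:key0 && chmod 0600 /tmp/:spdk-test:key0
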
00:31:07.581 14:49:19 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:07.581 14:49:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:07.581 [2024-07-10 14:49:19.723861] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 00:31:07.581 [2024-07-10 14:49:19.724466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119690 ] 00:31:07.581 [2024-07-10 14:49:19.846714] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:07.581 [2024-07-10 14:49:19.861888] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.839 [2024-07-10 14:49:19.906631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.772 14:49:20 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:08.772 14:49:20 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:31:08.772 14:49:20 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:31:08.772 14:49:20 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.772 14:49:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:08.772 [2024-07-10 14:49:20.728090] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:08.772 null0 00:31:08.772 [2024-07-10 14:49:20.760049] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:08.772 [2024-07-10 14:49:20.760255] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:08.772 14:49:20 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.772 14:49:20 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:31:08.772 774214165 00:31:08.772 14:49:20 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:31:08.772 651696181 00:31:08.772 14:49:20 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=119728 00:31:08.772 14:49:20 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 119728 /var/tmp/bperf.sock 00:31:08.772 14:49:20 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:31:08.772 14:49:20 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 119728 ']' 00:31:08.772 14:49:20 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:08.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:08.772 14:49:20 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:08.772 14:49:20 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:08.772 14:49:20 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:08.772 14:49:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:08.772 [2024-07-10 14:49:20.840936] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.07.0-rc1 initialization... 
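The two serial numbers printed above (774214165 and 651696181) come from loading the interchange PSKs into the kernel session keyring; unlike keyring_file, this path hands SPDK only the key name. A condensed sketch of the keyctl lifecycle exercised here, collected from the linux.sh@66/@16/@27/@34 call sites in the trace (serial numbers differ per run, and reading the key material back out of the prep_key file is a simplification):

# Load the PSK into the session keyring; keyctl prints the new key's serial number.
sn=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)
# Later checks resolve the name back to the same serial and dump the payload.
keyctl search @s user :spdk-test:key0   # expected to print $sn
keyctl print "$sn"                      # expected to print NVMeTLSkey-1:00:...:
# The controller is then attached by key name rather than by file path, as traced
# at linux.sh@75:
#   rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
#     -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
#     -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
# Cleanup removes the key from the session keyring again.
keyctl unlink "$sn"
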
00:31:08.772 [2024-07-10 14:49:20.841027] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119728 ] 00:31:08.772 [2024-07-10 14:49:20.964091] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:08.772 [2024-07-10 14:49:20.984787] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.772 [2024-07-10 14:49:21.027086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:09.029 14:49:21 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:09.029 14:49:21 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:31:09.029 14:49:21 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:31:09.029 14:49:21 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:31:09.286 14:49:21 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:31:09.286 14:49:21 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:09.544 14:49:21 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:09.544 14:49:21 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:09.801 [2024-07-10 14:49:21.891979] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:09.801 nvme0n1 00:31:09.801 14:49:21 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:31:09.801 14:49:21 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:31:09.801 14:49:21 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:09.801 14:49:21 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:09.801 14:49:21 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:09.801 14:49:21 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:10.059 14:49:22 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:31:10.059 14:49:22 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:10.059 14:49:22 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:31:10.059 14:49:22 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:31:10.059 14:49:22 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:10.059 14:49:22 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:31:10.059 14:49:22 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:10.318 14:49:22 keyring_linux -- keyring/linux.sh@25 -- # sn=774214165 00:31:10.576 14:49:22 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:31:10.576 14:49:22 keyring_linux -- 
keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:31:10.576 14:49:22 keyring_linux -- keyring/linux.sh@26 -- # [[ 774214165 == \7\7\4\2\1\4\1\6\5 ]] 00:31:10.576 14:49:22 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 774214165 00:31:10.576 14:49:22 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:31:10.576 14:49:22 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:10.576 Running I/O for 1 seconds... 00:31:11.602 00:31:11.602 Latency(us) 00:31:11.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:11.602 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:11.602 nvme0n1 : 1.01 11410.32 44.57 0.00 0.00 11149.34 6196.13 16205.27 00:31:11.602 =================================================================================================================== 00:31:11.602 Total : 11410.32 44.57 0.00 0.00 11149.34 6196.13 16205.27 00:31:11.602 0 00:31:11.602 14:49:23 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:11.602 14:49:23 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:11.860 14:49:24 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:31:11.860 14:49:24 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:31:11.860 14:49:24 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:11.860 14:49:24 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:11.860 14:49:24 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:11.860 14:49:24 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:12.119 14:49:24 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:31:12.119 14:49:24 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:12.119 14:49:24 keyring_linux -- keyring/linux.sh@23 -- # return 00:31:12.119 14:49:24 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:12.119 14:49:24 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:31:12.119 14:49:24 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:12.119 14:49:24 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:12.378 14:49:24 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:12.378 14:49:24 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:12.378 14:49:24 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:12.378 14:49:24 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:12.378 14:49:24 keyring_linux -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:12.378 [2024-07-10 14:49:24.634228] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:12.378 [2024-07-10 14:49:24.634522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dde1b0 (107): Transport endpoint is not connected 00:31:12.378 [2024-07-10 14:49:24.635513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dde1b0 (9): Bad file descriptor 00:31:12.378 [2024-07-10 14:49:24.636510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:12.378 [2024-07-10 14:49:24.636533] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:12.378 [2024-07-10 14:49:24.636542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:12.378 2024/07/10 14:49:24 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:31:12.378 request: 00:31:12.378 { 00:31:12.378 "method": "bdev_nvme_attach_controller", 00:31:12.378 "params": { 00:31:12.378 "name": "nvme0", 00:31:12.378 "trtype": "tcp", 00:31:12.378 "traddr": "127.0.0.1", 00:31:12.378 "adrfam": "ipv4", 00:31:12.378 "trsvcid": "4420", 00:31:12.378 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:12.378 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:12.378 "prchk_reftag": false, 00:31:12.378 "prchk_guard": false, 00:31:12.378 "hdgst": false, 00:31:12.378 "ddgst": false, 00:31:12.378 "psk": ":spdk-test:key1" 00:31:12.378 } 00:31:12.378 } 00:31:12.378 Got JSON-RPC error response 00:31:12.378 GoRPCClient: error on JSON-RPC call 00:31:12.378 14:49:24 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:31:12.378 14:49:24 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:12.378 14:49:24 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:12.378 14:49:24 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:12.378 14:49:24 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:31:12.378 14:49:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:12.378 14:49:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:31:12.378 14:49:24 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:31:12.378 14:49:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:31:12.378 14:49:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:31:12.378 14:49:24 keyring_linux -- keyring/linux.sh@33 -- # sn=774214165 00:31:12.378 14:49:24 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 774214165 00:31:12.378 1 links removed 00:31:12.378 14:49:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:12.378 14:49:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:31:12.378 14:49:24 keyring_linux -- 
keyring/linux.sh@31 -- # local name=key1 sn 00:31:12.638 14:49:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:31:12.638 14:49:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:31:12.638 14:49:24 keyring_linux -- keyring/linux.sh@33 -- # sn=651696181 00:31:12.638 14:49:24 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 651696181 00:31:12.638 1 links removed 00:31:12.638 14:49:24 keyring_linux -- keyring/linux.sh@41 -- # killprocess 119728 00:31:12.638 14:49:24 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 119728 ']' 00:31:12.638 14:49:24 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 119728 00:31:12.638 14:49:24 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:31:12.638 14:49:24 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:12.638 14:49:24 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119728 00:31:12.638 14:49:24 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:12.638 killing process with pid 119728 00:31:12.638 Received shutdown signal, test time was about 1.000000 seconds 00:31:12.638 00:31:12.638 Latency(us) 00:31:12.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:12.638 =================================================================================================================== 00:31:12.638 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:12.638 14:49:24 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:12.638 14:49:24 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119728' 00:31:12.638 14:49:24 keyring_linux -- common/autotest_common.sh@967 -- # kill 119728 00:31:12.638 14:49:24 keyring_linux -- common/autotest_common.sh@972 -- # wait 119728 00:31:12.638 14:49:24 keyring_linux -- keyring/linux.sh@42 -- # killprocess 119690 00:31:12.638 14:49:24 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 119690 ']' 00:31:12.638 14:49:24 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 119690 00:31:12.638 14:49:24 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:31:12.638 14:49:24 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:12.638 14:49:24 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119690 00:31:12.638 14:49:24 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:12.638 14:49:24 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:12.638 killing process with pid 119690 00:31:12.638 14:49:24 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119690' 00:31:12.638 14:49:24 keyring_linux -- common/autotest_common.sh@967 -- # kill 119690 00:31:12.638 14:49:24 keyring_linux -- common/autotest_common.sh@972 -- # wait 119690 00:31:12.898 00:31:12.898 real 0m5.630s 00:31:12.898 user 0m11.182s 00:31:12.898 sys 0m1.488s 00:31:12.898 14:49:25 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:12.898 ************************************ 00:31:12.898 END TEST keyring_linux 00:31:12.898 14:49:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:12.898 ************************************ 00:31:12.898 14:49:25 -- common/autotest_common.sh@1142 -- # return 0 00:31:12.898 14:49:25 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:31:12.898 14:49:25 -- spdk/autotest.sh@312 -- # '[' 0 -eq 
1 ']' 00:31:12.898 14:49:25 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:31:12.898 14:49:25 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:31:12.898 14:49:25 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:31:12.898 14:49:25 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:31:12.898 14:49:25 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:31:12.898 14:49:25 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:31:12.898 14:49:25 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:31:12.898 14:49:25 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:31:12.898 14:49:25 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:31:12.898 14:49:25 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:31:12.898 14:49:25 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:31:12.898 14:49:25 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:31:12.898 14:49:25 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:31:12.898 14:49:25 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:31:12.898 14:49:25 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:31:12.898 14:49:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:12.898 14:49:25 -- common/autotest_common.sh@10 -- # set +x 00:31:12.898 14:49:25 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:31:12.898 14:49:25 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:31:12.898 14:49:25 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:31:12.898 14:49:25 -- common/autotest_common.sh@10 -- # set +x 00:31:14.801 INFO: APP EXITING 00:31:14.801 INFO: killing all VMs 00:31:14.801 INFO: killing vhost app 00:31:14.801 INFO: EXIT DONE 00:31:15.059 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:15.319 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:31:15.319 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:31:15.886 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:15.886 Cleaning 00:31:15.886 Removing: /var/run/dpdk/spdk0/config 00:31:15.886 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:15.886 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:15.886 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:15.886 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:15.886 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:15.886 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:15.886 Removing: /var/run/dpdk/spdk1/config 00:31:15.886 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:31:15.886 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:31:15.886 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:31:15.886 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:31:15.886 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:31:15.886 Removing: /var/run/dpdk/spdk1/hugepage_info 00:31:15.886 Removing: /var/run/dpdk/spdk2/config 00:31:15.886 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:31:15.886 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:31:15.886 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:31:15.886 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:31:15.886 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:31:15.886 Removing: /var/run/dpdk/spdk2/hugepage_info 00:31:15.886 Removing: /var/run/dpdk/spdk3/config 00:31:15.886 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:31:15.886 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:31:15.886 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:31:15.886 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:31:15.886 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:31:15.886 Removing: /var/run/dpdk/spdk3/hugepage_info 00:31:16.144 Removing: /var/run/dpdk/spdk4/config 00:31:16.144 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:31:16.144 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:31:16.144 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:31:16.144 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:31:16.144 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:31:16.144 Removing: /var/run/dpdk/spdk4/hugepage_info 00:31:16.144 Removing: /dev/shm/nvmf_trace.0 00:31:16.144 Removing: /dev/shm/spdk_tgt_trace.pid74464 00:31:16.144 Removing: /var/run/dpdk/spdk0 00:31:16.144 Removing: /var/run/dpdk/spdk1 00:31:16.144 Removing: /var/run/dpdk/spdk2 00:31:16.144 Removing: /var/run/dpdk/spdk3 00:31:16.144 Removing: /var/run/dpdk/spdk4 00:31:16.144 Removing: /var/run/dpdk/spdk_pid100309 00:31:16.144 Removing: /var/run/dpdk/spdk_pid100413 00:31:16.144 Removing: /var/run/dpdk/spdk_pid100546 00:31:16.144 Removing: /var/run/dpdk/spdk_pid100578 00:31:16.144 Removing: /var/run/dpdk/spdk_pid100603 00:31:16.144 Removing: /var/run/dpdk/spdk_pid100631 00:31:16.144 Removing: /var/run/dpdk/spdk_pid100761 00:31:16.144 Removing: /var/run/dpdk/spdk_pid100892 00:31:16.144 Removing: /var/run/dpdk/spdk_pid101113 00:31:16.144 Removing: /var/run/dpdk/spdk_pid101218 00:31:16.144 Removing: /var/run/dpdk/spdk_pid101466 00:31:16.144 Removing: /var/run/dpdk/spdk_pid101572 00:31:16.144 Removing: /var/run/dpdk/spdk_pid101688 00:31:16.144 Removing: /var/run/dpdk/spdk_pid102024 00:31:16.144 Removing: /var/run/dpdk/spdk_pid102406 00:31:16.144 Removing: /var/run/dpdk/spdk_pid102412 00:31:16.144 Removing: /var/run/dpdk/spdk_pid104597 00:31:16.144 Removing: /var/run/dpdk/spdk_pid104882 00:31:16.144 Removing: /var/run/dpdk/spdk_pid105351 00:31:16.144 Removing: /var/run/dpdk/spdk_pid105354 00:31:16.144 Removing: /var/run/dpdk/spdk_pid105696 00:31:16.144 Removing: /var/run/dpdk/spdk_pid105716 00:31:16.144 Removing: /var/run/dpdk/spdk_pid105731 00:31:16.144 Removing: /var/run/dpdk/spdk_pid105768 00:31:16.144 Removing: /var/run/dpdk/spdk_pid105777 00:31:16.144 Removing: /var/run/dpdk/spdk_pid105916 00:31:16.144 Removing: /var/run/dpdk/spdk_pid105924 00:31:16.144 Removing: /var/run/dpdk/spdk_pid106031 00:31:16.144 Removing: /var/run/dpdk/spdk_pid106034 00:31:16.144 Removing: /var/run/dpdk/spdk_pid106138 00:31:16.145 Removing: /var/run/dpdk/spdk_pid106146 00:31:16.145 Removing: /var/run/dpdk/spdk_pid106615 00:31:16.145 Removing: /var/run/dpdk/spdk_pid106664 00:31:16.145 Removing: /var/run/dpdk/spdk_pid106810 00:31:16.145 Removing: /var/run/dpdk/spdk_pid106940 00:31:16.145 Removing: /var/run/dpdk/spdk_pid107317 00:31:16.145 Removing: /var/run/dpdk/spdk_pid107548 00:31:16.145 Removing: /var/run/dpdk/spdk_pid108026 00:31:16.145 Removing: /var/run/dpdk/spdk_pid108601 00:31:16.145 Removing: /var/run/dpdk/spdk_pid109922 00:31:16.145 Removing: /var/run/dpdk/spdk_pid110493 00:31:16.145 Removing: /var/run/dpdk/spdk_pid110495 00:31:16.145 Removing: /var/run/dpdk/spdk_pid112441 00:31:16.145 Removing: /var/run/dpdk/spdk_pid112526 00:31:16.145 Removing: /var/run/dpdk/spdk_pid112597 00:31:16.145 Removing: /var/run/dpdk/spdk_pid112688 00:31:16.145 Removing: /var/run/dpdk/spdk_pid112815 00:31:16.145 Removing: /var/run/dpdk/spdk_pid112904 00:31:16.145 Removing: /var/run/dpdk/spdk_pid112971 00:31:16.145 Removing: 
/var/run/dpdk/spdk_pid113048 00:31:16.145 Removing: /var/run/dpdk/spdk_pid113360 00:31:16.145 Removing: /var/run/dpdk/spdk_pid114023 00:31:16.145 Removing: /var/run/dpdk/spdk_pid115343 00:31:16.145 Removing: /var/run/dpdk/spdk_pid115530 00:31:16.145 Removing: /var/run/dpdk/spdk_pid115797 00:31:16.145 Removing: /var/run/dpdk/spdk_pid116075 00:31:16.145 Removing: /var/run/dpdk/spdk_pid116592 00:31:16.145 Removing: /var/run/dpdk/spdk_pid116597 00:31:16.145 Removing: /var/run/dpdk/spdk_pid116938 00:31:16.145 Removing: /var/run/dpdk/spdk_pid117092 00:31:16.145 Removing: /var/run/dpdk/spdk_pid117244 00:31:16.145 Removing: /var/run/dpdk/spdk_pid117336 00:31:16.145 Removing: /var/run/dpdk/spdk_pid117470 00:31:16.145 Removing: /var/run/dpdk/spdk_pid117578 00:31:16.145 Removing: /var/run/dpdk/spdk_pid118240 00:31:16.145 Removing: /var/run/dpdk/spdk_pid118270 00:31:16.145 Removing: /var/run/dpdk/spdk_pid118311 00:31:16.145 Removing: /var/run/dpdk/spdk_pid118557 00:31:16.145 Removing: /var/run/dpdk/spdk_pid118593 00:31:16.145 Removing: /var/run/dpdk/spdk_pid118624 00:31:16.145 Removing: /var/run/dpdk/spdk_pid119040 00:31:16.145 Removing: /var/run/dpdk/spdk_pid119074 00:31:16.145 Removing: /var/run/dpdk/spdk_pid119540 00:31:16.403 Removing: /var/run/dpdk/spdk_pid119690 00:31:16.403 Removing: /var/run/dpdk/spdk_pid119728 00:31:16.403 Removing: /var/run/dpdk/spdk_pid74330 00:31:16.403 Removing: /var/run/dpdk/spdk_pid74464 00:31:16.403 Removing: /var/run/dpdk/spdk_pid74706 00:31:16.403 Removing: /var/run/dpdk/spdk_pid74801 00:31:16.403 Removing: /var/run/dpdk/spdk_pid74821 00:31:16.403 Removing: /var/run/dpdk/spdk_pid74938 00:31:16.403 Removing: /var/run/dpdk/spdk_pid74949 00:31:16.403 Removing: /var/run/dpdk/spdk_pid75067 00:31:16.403 Removing: /var/run/dpdk/spdk_pid75344 00:31:16.403 Removing: /var/run/dpdk/spdk_pid75514 00:31:16.403 Removing: /var/run/dpdk/spdk_pid75596 00:31:16.403 Removing: /var/run/dpdk/spdk_pid75669 00:31:16.403 Removing: /var/run/dpdk/spdk_pid75745 00:31:16.403 Removing: /var/run/dpdk/spdk_pid75778 00:31:16.403 Removing: /var/run/dpdk/spdk_pid75814 00:31:16.403 Removing: /var/run/dpdk/spdk_pid75870 00:31:16.403 Removing: /var/run/dpdk/spdk_pid75957 00:31:16.403 Removing: /var/run/dpdk/spdk_pid76574 00:31:16.403 Removing: /var/run/dpdk/spdk_pid76619 00:31:16.403 Removing: /var/run/dpdk/spdk_pid76669 00:31:16.403 Removing: /var/run/dpdk/spdk_pid76678 00:31:16.403 Removing: /var/run/dpdk/spdk_pid76738 00:31:16.403 Removing: /var/run/dpdk/spdk_pid76766 00:31:16.403 Removing: /var/run/dpdk/spdk_pid76845 00:31:16.403 Removing: /var/run/dpdk/spdk_pid76854 00:31:16.403 Removing: /var/run/dpdk/spdk_pid76910 00:31:16.403 Removing: /var/run/dpdk/spdk_pid76936 00:31:16.403 Removing: /var/run/dpdk/spdk_pid76986 00:31:16.403 Removing: /var/run/dpdk/spdk_pid76998 00:31:16.403 Removing: /var/run/dpdk/spdk_pid77145 00:31:16.403 Removing: /var/run/dpdk/spdk_pid77180 00:31:16.403 Removing: /var/run/dpdk/spdk_pid77249 00:31:16.403 Removing: /var/run/dpdk/spdk_pid77305 00:31:16.403 Removing: /var/run/dpdk/spdk_pid77330 00:31:16.403 Removing: /var/run/dpdk/spdk_pid77388 00:31:16.403 Removing: /var/run/dpdk/spdk_pid77422 00:31:16.403 Removing: /var/run/dpdk/spdk_pid77452 00:31:16.403 Removing: /var/run/dpdk/spdk_pid77485 00:31:16.403 Removing: /var/run/dpdk/spdk_pid77521 00:31:16.403 Removing: /var/run/dpdk/spdk_pid77550 00:31:16.403 Removing: /var/run/dpdk/spdk_pid77579 00:31:16.403 Removing: /var/run/dpdk/spdk_pid77619 00:31:16.403 Removing: /var/run/dpdk/spdk_pid77648 00:31:16.403 Removing: 
/var/run/dpdk/spdk_pid77677 00:31:16.403 Removing: /var/run/dpdk/spdk_pid77717 00:31:16.403 Removing: /var/run/dpdk/spdk_pid77746 00:31:16.403 Removing: /var/run/dpdk/spdk_pid77775 00:31:16.403 Removing: /var/run/dpdk/spdk_pid77815 00:31:16.403 Removing: /var/run/dpdk/spdk_pid77844 00:31:16.403 Removing: /var/run/dpdk/spdk_pid77873 00:31:16.403 Removing: /var/run/dpdk/spdk_pid77915 00:31:16.403 Removing: /var/run/dpdk/spdk_pid77947 00:31:16.403 Removing: /var/run/dpdk/spdk_pid77985 00:31:16.403 Removing: /var/run/dpdk/spdk_pid78019 00:31:16.403 Removing: /var/run/dpdk/spdk_pid78049 00:31:16.403 Removing: /var/run/dpdk/spdk_pid78119 00:31:16.403 Removing: /var/run/dpdk/spdk_pid78225 00:31:16.403 Removing: /var/run/dpdk/spdk_pid78601 00:31:16.403 Removing: /var/run/dpdk/spdk_pid85193 00:31:16.403 Removing: /var/run/dpdk/spdk_pid85491 00:31:16.403 Removing: /var/run/dpdk/spdk_pid87894 00:31:16.403 Removing: /var/run/dpdk/spdk_pid88256 00:31:16.403 Removing: /var/run/dpdk/spdk_pid88473 00:31:16.403 Removing: /var/run/dpdk/spdk_pid88524 00:31:16.403 Removing: /var/run/dpdk/spdk_pid89127 00:31:16.403 Removing: /var/run/dpdk/spdk_pid89532 00:31:16.403 Removing: /var/run/dpdk/spdk_pid89578 00:31:16.403 Removing: /var/run/dpdk/spdk_pid89913 00:31:16.403 Removing: /var/run/dpdk/spdk_pid90426 00:31:16.403 Removing: /var/run/dpdk/spdk_pid90861 00:31:16.403 Removing: /var/run/dpdk/spdk_pid91753 00:31:16.403 Removing: /var/run/dpdk/spdk_pid92711 00:31:16.403 Removing: /var/run/dpdk/spdk_pid92822 00:31:16.403 Removing: /var/run/dpdk/spdk_pid92884 00:31:16.403 Removing: /var/run/dpdk/spdk_pid94308 00:31:16.403 Removing: /var/run/dpdk/spdk_pid94516 00:31:16.403 Removing: /var/run/dpdk/spdk_pid99890 00:31:16.403 Clean 00:31:16.660 14:49:28 -- common/autotest_common.sh@1451 -- # return 0 00:31:16.660 14:49:28 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:31:16.660 14:49:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:16.660 14:49:28 -- common/autotest_common.sh@10 -- # set +x 00:31:16.660 14:49:28 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:31:16.660 14:49:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:16.660 14:49:28 -- common/autotest_common.sh@10 -- # set +x 00:31:16.660 14:49:28 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:16.660 14:49:28 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:31:16.660 14:49:28 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:31:16.660 14:49:28 -- spdk/autotest.sh@391 -- # hash lcov 00:31:16.660 14:49:28 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:31:16.660 14:49:28 -- spdk/autotest.sh@393 -- # hostname 00:31:16.660 14:49:28 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:31:16.918 geninfo: WARNING: invalid characters removed from testname! 
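The coverage post-processing that starts here is a three-stage lcov pipeline: capture counters from the instrumented SPDK tree, merge them with the pre-test baseline, then strip paths that are not SPDK's own code (the merge and filter commands are traced next). A condensed sketch of the autotest.sh@393-@400 steps, with the long --rc flag set abbreviated to a single variable here:

rc='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
out=/home/vagrant/spdk_repo/spdk/../output
# 1) capture test-run counters, tagged with the hostname as the test name
lcov $rc --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o "$out/cov_test.info"
# 2) append them to the baseline captured before the tests ran
lcov $rc -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
# 3) drop non-SPDK sources: DPDK, system headers, and sample apps
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
  lcov $rc -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
done
rm -f "$out/cov_base.info" "$out/cov_test.info"
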
00:31:49.019 14:49:56 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:49.019 14:50:00 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:50.921 14:50:02 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:54.203 14:50:05 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:56.736 14:50:08 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:00.023 14:50:11 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:03.302 14:50:14 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:03.302 14:50:14 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:03.302 14:50:14 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:32:03.302 14:50:14 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:03.302 14:50:14 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:03.302 14:50:14 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.302 14:50:14 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.302 14:50:14 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.302 14:50:14 -- paths/export.sh@5 -- $ export PATH 00:32:03.302 14:50:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.302 14:50:14 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:32:03.302 14:50:14 -- common/autobuild_common.sh@444 -- $ date +%s 00:32:03.302 14:50:14 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720623014.XXXXXX 00:32:03.302 14:50:14 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720623014.wlWCUH 00:32:03.302 14:50:14 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:32:03.302 14:50:14 -- common/autobuild_common.sh@450 -- $ '[' -n main ']' 00:32:03.302 14:50:14 -- common/autobuild_common.sh@451 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:32:03.302 14:50:14 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:32:03.302 14:50:14 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:32:03.302 14:50:14 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:32:03.302 14:50:14 -- common/autobuild_common.sh@460 -- $ get_config_params 00:32:03.302 14:50:14 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:32:03.302 14:50:14 -- common/autotest_common.sh@10 -- $ set +x 00:32:03.302 14:50:14 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:32:03.302 14:50:14 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:32:03.302 14:50:14 -- pm/common@17 -- $ local monitor 00:32:03.302 14:50:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:03.302 14:50:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:03.302 14:50:14 -- pm/common@25 -- $ sleep 1 00:32:03.302 14:50:14 -- pm/common@21 -- $ date +%s 00:32:03.302 14:50:14 -- pm/common@21 -- $ date +%s 00:32:03.302 14:50:14 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720623014 00:32:03.302 14:50:14 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720623014 00:32:03.302 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720623014_collect-vmstat.pm.log 00:32:03.302 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720623014_collect-cpu-load.pm.log 00:32:03.869 14:50:15 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:32:03.869 14:50:15 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:32:03.869 14:50:15 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:32:03.869 14:50:15 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:32:03.869 14:50:15 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:32:03.869 14:50:15 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:32:03.869 14:50:15 -- spdk/autopackage.sh@19 -- $ timing_finish 00:32:03.869 14:50:15 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:32:03.869 14:50:15 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:32:03.869 14:50:15 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:03.869 14:50:16 -- spdk/autopackage.sh@20 -- $ exit 0 00:32:03.869 14:50:16 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:32:03.869 14:50:16 -- pm/common@29 -- $ signal_monitor_resources TERM 00:32:03.869 14:50:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:32:03.869 14:50:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:03.869 14:50:16 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:32:03.869 14:50:16 -- pm/common@44 -- $ pid=121431 00:32:03.869 14:50:16 -- pm/common@50 -- $ kill -TERM 121431 00:32:03.869 14:50:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:03.869 14:50:16 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:32:03.869 14:50:16 -- pm/common@44 -- $ pid=121433 00:32:03.869 14:50:16 -- pm/common@50 -- $ kill -TERM 121433 00:32:03.869 + [[ -n 5900 ]] 00:32:03.869 + sudo kill 5900 00:32:03.878 [Pipeline] } 00:32:03.898 [Pipeline] // timeout 00:32:03.903 [Pipeline] } 00:32:03.921 [Pipeline] // stage 00:32:03.926 [Pipeline] } 00:32:03.943 [Pipeline] // catchError 00:32:03.951 [Pipeline] stage 00:32:03.953 [Pipeline] { (Stop VM) 00:32:03.964 [Pipeline] sh 00:32:04.238 + vagrant halt 00:32:08.466 ==> default: Halting domain... 00:32:13.744 [Pipeline] sh 00:32:14.022 + vagrant destroy -f 00:32:18.206 ==> default: Removing domain... 
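The autopackage epilogue above stops the resource monitors through the pid files they dropped at start-up: for each collector, pm/common checks that <name>.pid exists under the power output directory and sends SIGTERM to the recorded pid, after which the wrapper kills a previously recorded process (sudo kill 5900) and the VM is torn down with vagrant halt and vagrant destroy -f. A simplified reconstruction of that shutdown path is sketched here; directory and monitor names follow the log, while reading the pid from the pid file is an assumption about how pm/common assigns $pid, since the actual scripts/perf/pm/common source is not part of this log.

  # Simplified sketch of the pid-file based monitor shutdown traced above.
  power_dir=/home/vagrant/spdk_repo/spdk/../output/power

  stop_monitors() {
      local monitor pid pidfile
      for monitor in collect-cpu-load collect-vmstat; do
          pidfile="$power_dir/$monitor.pid"
          [[ -e $pidfile ]] || continue            # monitor never started or already cleaned up
          pid=$(<"$pidfile")                       # assumed: pid file holds the collector's pid
          kill -TERM "$pid" 2>/dev/null || true    # tolerate a collector that already exited
      done
  }

  trap stop_monitors EXIT   # parallels the 'trap stop_monitor_resources EXIT' seen in the log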
00:32:18.220 [Pipeline] sh 00:32:18.500 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:32:18.508 [Pipeline] } 00:32:18.528 [Pipeline] // stage 00:32:18.535 [Pipeline] } 00:32:18.554 [Pipeline] // dir 00:32:18.560 [Pipeline] } 00:32:18.575 [Pipeline] // wrap 00:32:18.582 [Pipeline] } 00:32:18.595 [Pipeline] // catchError 00:32:18.604 [Pipeline] stage 00:32:18.606 [Pipeline] { (Epilogue) 00:32:18.620 [Pipeline] sh 00:32:18.900 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:27.019 [Pipeline] catchError 00:32:27.021 [Pipeline] { 00:32:27.038 [Pipeline] sh 00:32:27.320 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:27.579 Artifacts sizes are good 00:32:27.588 [Pipeline] } 00:32:27.607 [Pipeline] // catchError 00:32:27.622 [Pipeline] archiveArtifacts 00:32:27.630 Archiving artifacts 00:32:27.783 [Pipeline] cleanWs 00:32:27.806 [WS-CLEANUP] Deleting project workspace... 00:32:27.806 [WS-CLEANUP] Deferred wipeout is used... 00:32:27.852 [WS-CLEANUP] done 00:32:27.854 [Pipeline] } 00:32:27.873 [Pipeline] // stage 00:32:27.882 [Pipeline] } 00:32:27.899 [Pipeline] // node 00:32:27.905 [Pipeline] End of Pipeline 00:32:27.942 Finished: SUCCESS
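check_artifacts_size.sh only reports "Artifacts sizes are good" in the epilogue above and its source is not part of this log, so the following is purely a hypothetical stand-in for that kind of gate: the output directory and the 1 GiB threshold are assumptions, and the sketch illustrates the shape of such a check rather than the script's real implementation.

  # Hypothetical artifact-size gate; NOT the real check_artifacts_size.sh.
  # Both the directory and the 1 GiB threshold are assumptions.
  output_dir=/var/jenkins/workspace/nvmf-tcp-vg-autotest/output
  limit_kb=$((1024 * 1024))

  total_kb=$(du -sk "$output_dir" | awk '{print $1}')
  if (( total_kb > limit_kb )); then
      echo "Artifacts too large: $((total_kb / 1024)) MiB (limit $((limit_kb / 1024)) MiB)" >&2
      exit 1
  fi
  echo "Artifacts sizes are good"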